00:00:00.000 Started by upstream project "autotest-nightly" build number 4363
00:00:00.000 originally caused by:
00:00:00.000 Started by upstream project "nightly-trigger" build number 3726
00:00:00.000 originally caused by:
00:00:00.000 Started by timer
00:00:00.168 Checking out git https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool into /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4 to read jbp/jenkins/jjb-config/jobs/autotest-downstream/autotest-vg.groovy
00:00:00.169 The recommended git tool is: git
00:00:00.169 using credential 00000000-0000-0000-0000-000000000002
00:00:00.171 > git rev-parse --resolve-git-dir /var/jenkins_home/workspace/nvme-vg-autotest_script/33b20b30f0a51e6b52980845e0f6aa336787973ad45e341fbbf98d1b65b265d4/jbp/.git # timeout=10
00:00:00.223 Fetching changes from the remote Git repository
00:00:00.225 > git config remote.origin.url https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool # timeout=10
00:00:00.271 Using shallow fetch with depth 1
00:00:00.271 Fetching upstream changes from https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool
00:00:00.271 > git --version # timeout=10
00:00:00.304 > git --version # 'git version 2.39.2'
00:00:00.304 using GIT_ASKPASS to set credentials SPDKCI HTTPS Credentials
00:00:00.325 Setting http proxy: proxy-dmz.intel.com:911
00:00:00.325 > git fetch --tags --force --progress --depth=1 -- https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master # timeout=5
00:00:07.143 > git rev-parse origin/FETCH_HEAD^{commit} # timeout=10
00:00:07.156 > git rev-parse FETCH_HEAD^{commit} # timeout=10
00:00:07.167 Checking out Revision db4637e8b949f278f369ec13f70585206ccd9507 (FETCH_HEAD)
00:00:07.168 > git config core.sparsecheckout # timeout=10
00:00:07.177 > git read-tree -mu HEAD # timeout=10
00:00:07.193 > git checkout -f db4637e8b949f278f369ec13f70585206ccd9507 # timeout=5
00:00:07.210 Commit message: "jenkins/jjb-config: Add missing SPDK_TEST_NVME_INTERRUPT flag"
00:00:07.210 > git rev-list --no-walk db4637e8b949f278f369ec13f70585206ccd9507 # timeout=10
00:00:07.292 [Pipeline] Start of Pipeline
00:00:07.305 [Pipeline] library
00:00:07.307 Loading library shm_lib@master
00:00:07.307 Library shm_lib@master is cached. Copying from home.
00:00:07.322 [Pipeline] node
00:00:07.341 Running on VM-host-SM38 in /var/jenkins/workspace/nvme-vg-autotest
00:00:07.343 [Pipeline] {
00:00:07.352 [Pipeline] catchError
00:00:07.353 [Pipeline] {
00:00:07.362 [Pipeline] wrap
00:00:07.367 [Pipeline] {
00:00:07.375 [Pipeline] stage
00:00:07.376 [Pipeline] { (Prologue)
00:00:07.395 [Pipeline] echo
00:00:07.396 Node: VM-host-SM38
00:00:07.402 [Pipeline] cleanWs
00:00:07.414 [WS-CLEANUP] Deleting project workspace...
00:00:07.414 [WS-CLEANUP] Deferred wipeout is used...
00:00:07.422 [WS-CLEANUP] done
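For reference, the checkout traced above is just a shallow fetch of one ref followed by a detached checkout; a minimal hand-run equivalent (URL and revision taken from the log, the local directory name "jbp" is hypothetical) would be:

    # Sketch: reproduce the job's shallow checkout by hand.
    git init jbp && cd jbp
    git fetch --tags --force --depth=1 \
        https://review.spdk.io/gerrit/a/build_pool/jenkins_build_pool refs/heads/master
    git checkout -f db4637e8b949f278f369ec13f70585206ccd9507   # FETCH_HEAD in the log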
00:00:07.626 [Pipeline] setCustomBuildProperty
00:00:07.694 [Pipeline] httpRequest
00:00:08.027 [Pipeline] echo
00:00:08.029 Sorcerer 10.211.164.20 is alive
00:00:08.036 [Pipeline] retry
00:00:08.037 [Pipeline] {
00:00:08.050 [Pipeline] httpRequest
00:00:08.055 HttpMethod: GET
00:00:08.055 URL: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.056 Sending request to url: http://10.211.164.20/packages/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:08.071 Response Code: HTTP/1.1 200 OK
00:00:08.071 Success: Status code 200 is in the accepted range: 200,404
00:00:08.072 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.197 [Pipeline] }
00:00:13.214 [Pipeline] // retry
00:00:13.221 [Pipeline] sh
00:00:13.507 + tar --no-same-owner -xf jbp_db4637e8b949f278f369ec13f70585206ccd9507.tar.gz
00:00:13.524 [Pipeline] httpRequest
00:00:13.895 [Pipeline] echo
00:00:13.898 Sorcerer 10.211.164.20 is alive
00:00:13.910 [Pipeline] retry
00:00:13.918 [Pipeline] {
00:00:13.965 [Pipeline] httpRequest
00:00:13.981 HttpMethod: GET
00:00:13.982 URL: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:13.983 Sending request to url: http://10.211.164.20/packages/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:00:14.006 Response Code: HTTP/1.1 200 OK
00:00:14.006 Success: Status code 200 is in the accepted range: 200,404
00:00:14.007 Saving response body to /var/jenkins/workspace/nvme-vg-autotest/spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:01.930 [Pipeline] }
00:01:01.947 [Pipeline] // retry
00:01:01.955 [Pipeline] sh
00:01:02.245 + tar --no-same-owner -xf spdk_e01cb43b8578f9155d07a9bc6eee4e70a3af96b0.tar.gz
00:01:05.570 [Pipeline] sh
00:01:05.857 + git -C spdk log --oneline -n5
00:01:05.857 e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:05.857 d58eef2a2 nvme/rdma: Fix reinserting qpair in connecting list after stale state
00:01:05.857 2104eacf0 test/check_so_deps: use VERSION to look for prior tags
00:01:05.857 66289a6db build: use VERSION file for storing version
00:01:05.857 626389917 nvme/rdma: Don't limit max_sge if UMR is used
00:01:05.877 [Pipeline] writeFile
00:01:05.892 [Pipeline] sh
00:01:06.182 + jbp/jenkins/jjb-config/jobs/scripts/autorun_quirks.sh
00:01:06.194 [Pipeline] sh
00:01:06.472 + cat autorun-spdk.conf
00:01:06.472 SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.472 SPDK_TEST_NVME=1
00:01:06.472 SPDK_TEST_FTL=1
00:01:06.472 SPDK_TEST_ISAL=1
00:01:06.472 SPDK_RUN_ASAN=1
00:01:06.472 SPDK_RUN_UBSAN=1
00:01:06.472 SPDK_TEST_XNVME=1
00:01:06.472 SPDK_TEST_NVME_FDP=1
00:01:06.472 SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.476 RUN_NIGHTLY=1
00:01:06.480 [Pipeline] }
00:01:06.494 [Pipeline] // stage
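The autorun-spdk.conf dump above is the entire contract between the Jenkins job and the test scripts: every downstream stage sources the file and branches on the flags. A minimal sketch of the consumer side, mirroring the source / (( ... == 1 )) pattern that prepare_nvme.sh traces below (the echo is illustrative only):

    # Sketch: how a test-side script consumes autorun-spdk.conf.
    source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
    if (( SPDK_TEST_FTL == 1 )); then
        echo "FTL enabled: an ftl backing image will be provisioned"
    fi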
00:01:06.507 [Pipeline] stage
00:01:06.509 [Pipeline] { (Run VM)
00:01:06.521 [Pipeline] sh
00:01:06.801 + jbp/jenkins/jjb-config/jobs/scripts/prepare_nvme.sh
00:01:06.801 + echo 'Start stage prepare_nvme.sh'
00:01:06.801 Start stage prepare_nvme.sh
00:01:06.801 + [[ -n 9 ]]
00:01:06.801 + disk_prefix=ex9
00:01:06.801 + [[ -n /var/jenkins/workspace/nvme-vg-autotest ]]
00:01:06.801 + [[ -e /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf ]]
00:01:06.801 + source /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf
00:01:06.801 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:06.801 ++ SPDK_TEST_NVME=1
00:01:06.801 ++ SPDK_TEST_FTL=1
00:01:06.801 ++ SPDK_TEST_ISAL=1
00:01:06.801 ++ SPDK_RUN_ASAN=1
00:01:06.801 ++ SPDK_RUN_UBSAN=1
00:01:06.801 ++ SPDK_TEST_XNVME=1
00:01:06.801 ++ SPDK_TEST_NVME_FDP=1
00:01:06.801 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:06.801 ++ RUN_NIGHTLY=1
00:01:06.801 + cd /var/jenkins/workspace/nvme-vg-autotest
00:01:06.801 + nvme_files=()
00:01:06.801 + declare -A nvme_files
00:01:06.801 + backend_dir=/var/lib/libvirt/images/backends
00:01:06.801 + nvme_files['nvme.img']=5G
00:01:06.801 + nvme_files['nvme-cmb.img']=5G
00:01:06.801 + nvme_files['nvme-multi0.img']=4G
00:01:06.801 + nvme_files['nvme-multi1.img']=4G
00:01:06.801 + nvme_files['nvme-multi2.img']=4G
00:01:06.801 + nvme_files['nvme-openstack.img']=8G
00:01:06.801 + nvme_files['nvme-zns.img']=5G
00:01:06.801 + (( SPDK_TEST_NVME_PMR == 1 ))
00:01:06.801 + (( SPDK_TEST_FTL == 1 ))
00:01:06.801 + nvme_files["nvme-ftl.img"]=6G
00:01:06.801 + (( SPDK_TEST_NVME_FDP == 1 ))
00:01:06.801 + nvme_files["nvme-fdp.img"]=1G
00:01:06.801 + [[ ! -d /var/lib/libvirt/images/backends ]]
00:01:06.801 + for nvme in "${!nvme_files[@]}"
00:01:06.801 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi2.img -s 4G
00:01:06.801 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi2.img', fmt=raw size=4294967296 preallocation=falloc
00:01:06.801 + for nvme in "${!nvme_files[@]}"
00:01:06.801 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-ftl.img -s 6G
00:01:06.801 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-ftl.img', fmt=raw size=6442450944 preallocation=falloc
00:01:06.801 + for nvme in "${!nvme_files[@]}"
00:01:06.801 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-cmb.img -s 5G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-cmb.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-openstack.img -s 8G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-openstack.img', fmt=raw size=8589934592 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-zns.img -s 5G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-zns.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi1.img -s 4G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi1.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-multi0.img -s 4G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-multi0.img', fmt=raw size=4294967296 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme-fdp.img -s 1G
00:01:07.060 Formatting '/var/lib/libvirt/images/backends/ex9-nvme-fdp.img', fmt=raw size=1073741824 preallocation=falloc
00:01:07.060 + for nvme in "${!nvme_files[@]}"
00:01:07.060 + sudo -E spdk/scripts/vagrant/create_nvme_img.sh -n /var/lib/libvirt/images/backends/ex9-nvme.img -s 5G
00:01:07.319 Formatting '/var/lib/libvirt/images/backends/ex9-nvme.img', fmt=raw size=5368709120 preallocation=falloc
00:01:07.319 ++ sudo grep -rl ex9-nvme.img /etc/libvirt/qemu
00:01:07.319 + echo 'End stage prepare_nvme.sh'
00:01:07.319 End stage prepare_nvme.sh
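The Formatting lines above look like qemu-img output, so create_nvme_img.sh plausibly reduces to one qemu-img create call per backing file; a hedged equivalent for a single disk (flags inferred from the printed output, not from the script itself):

    # Sketch (inferred): one raw backing file as create_nvme_img.sh appears to make it.
    qemu-img create -f raw -o preallocation=falloc \
        /var/lib/libvirt/images/backends/ex9-nvme.img 5G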
00:01:07.329 [Pipeline] sh
00:01:07.608 + DISTRO=fedora39
00:01:07.608 + CPUS=10
00:01:07.608 + RAM=12288
00:01:07.608 + jbp/jenkins/jjb-config/jobs/scripts/vagrant_create_vm.sh
00:01:07.608 Setup: -n 10 -s 12288 -x -p libvirt --qemu-emulator=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 --nic-model=e1000 -b /var/lib/libvirt/images/backends/ex9-nvme-ftl.img,nvme,,,,,true -b /var/lib/libvirt/images/backends/ex9-nvme.img -b /var/lib/libvirt/images/backends/ex9-nvme-multi0.img,nvme,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img -b /var/lib/libvirt/images/backends/ex9-nvme-fdp.img,nvme,,,,,,on -H -a -v -f fedora39
00:01:07.608
00:01:07.608 DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant
00:01:07.608 SPDK_DIR=/var/jenkins/workspace/nvme-vg-autotest/spdk
00:01:07.608 VAGRANT_TARGET=/var/jenkins/workspace/nvme-vg-autotest
00:01:07.608 HELP=0
00:01:07.608 DRY_RUN=0
00:01:07.608 NVME_FILE=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,/var/lib/libvirt/images/backends/ex9-nvme.img,/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,
00:01:07.608 NVME_DISKS_TYPE=nvme,nvme,nvme,nvme,
00:01:07.608 NVME_AUTO_CREATE=0
00:01:07.608 NVME_DISKS_NAMESPACES=,,/var/lib/libvirt/images/backends/ex9-nvme-multi1.img:/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,,
00:01:07.608 NVME_CMB=,,,,
00:01:07.608 NVME_PMR=,,,,
00:01:07.608 NVME_ZNS=,,,,
00:01:07.608 NVME_MS=true,,,,
00:01:07.608 NVME_FDP=,,,on,
00:01:07.608 SPDK_VAGRANT_DISTRO=fedora39
00:01:07.608 SPDK_VAGRANT_VMCPU=10
00:01:07.608 SPDK_VAGRANT_VMRAM=12288
00:01:07.608 SPDK_VAGRANT_PROVIDER=libvirt
00:01:07.608 SPDK_VAGRANT_HTTP_PROXY=
00:01:07.608 SPDK_QEMU_EMULATOR=/usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64
00:01:07.608 SPDK_OPENSTACK_NETWORK=0
00:01:07.608 VAGRANT_PACKAGE_BOX=0
00:01:07.608 VAGRANTFILE=/var/jenkins/workspace/nvme-vg-autotest/spdk/scripts/vagrant/Vagrantfile
00:01:07.608 FORCE_DISTRO=true
00:01:07.608 VAGRANT_BOX_VERSION=
00:01:07.608 EXTRA_VAGRANTFILES=
00:01:07.608 NIC_MODEL=e1000
00:01:07.608
00:01:07.608 mkdir: created directory '/var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt'
00:01:07.608 /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt /var/jenkins/workspace/nvme-vg-autotest
00:01:10.150 Bringing machine 'default' up with 'libvirt' provider...
00:01:10.410 ==> default: Creating image (snapshot of base box volume).
00:01:10.671 ==> default: Creating domain with the following settings...
00:01:10.671 ==> default: -- Name: fedora39-39-1.5-1721788873-2326_default_1734227795_3bbd1dd99b7f36584a26
00:01:10.671 ==> default: -- Domain type: kvm
00:01:10.671 ==> default: -- Cpus: 10
00:01:10.671 ==> default: -- Feature: acpi
00:01:10.671 ==> default: -- Feature: apic
00:01:10.671 ==> default: -- Feature: pae
00:01:10.671 ==> default: -- Memory: 12288M
00:01:10.671 ==> default: -- Memory Backing: hugepages:
00:01:10.671 ==> default: -- Management MAC:
00:01:10.671 ==> default: -- Loader:
00:01:10.671 ==> default: -- Nvram:
00:01:10.671 ==> default: -- Base box: spdk/fedora39
00:01:10.671 ==> default: -- Storage pool: default
00:01:10.671 ==> default: -- Image: /var/lib/libvirt/images/fedora39-39-1.5-1721788873-2326_default_1734227795_3bbd1dd99b7f36584a26.img (20G)
00:01:10.671 ==> default: -- Volume Cache: default
00:01:10.671 ==> default: -- Kernel:
00:01:10.671 ==> default: -- Initrd:
00:01:10.671 ==> default: -- Graphics Type: vnc
00:01:10.671 ==> default: -- Graphics Port: -1
00:01:10.671 ==> default: -- Graphics IP: 127.0.0.1
00:01:10.671 ==> default: -- Graphics Password: Not defined
00:01:10.671 ==> default: -- Video Type: cirrus
00:01:10.671 ==> default: -- Video VRAM: 9216
00:01:10.671 ==> default: -- Sound Type:
00:01:10.671 ==> default: -- Keymap: en-us
00:01:10.671 ==> default: -- TPM Path:
00:01:10.671 ==> default: -- INPUT: type=mouse, bus=ps2
00:01:10.671 ==> default: -- Command line args:
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme,id=nvme-0,serial=12340,addr=0x10,
00:01:10.671 ==> default: -> value=-drive,
00:01:10.671 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-ftl.img,if=none,id=nvme-0-drive0,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme-ns,drive=nvme-0-drive0,bus=nvme-0,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,ms=64,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme,id=nvme-1,serial=12341,addr=0x11,
00:01:10.671 ==> default: -> value=-drive,
00:01:10.671 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme,id=nvme-2,serial=12342,addr=0x12,
00:01:10.671 ==> default: -> value=-drive,
00:01:10.671 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi0.img,if=none,id=nvme-2-drive0,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme-ns,drive=nvme-2-drive0,bus=nvme-2,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.671 ==> default: -> value=-drive,
00:01:10.671 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi1.img,if=none,id=nvme-2-drive1,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme-ns,drive=nvme-2-drive1,bus=nvme-2,nsid=2,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.671 ==> default: -> value=-drive,
00:01:10.671 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-multi2.img,if=none,id=nvme-2-drive2,
00:01:10.671 ==> default: -> value=-device,
00:01:10.671 ==> default: -> value=nvme-ns,drive=nvme-2-drive2,bus=nvme-2,nsid=3,zoned=false,logical_block_size=4096,physical_block_size=4096,
00:01:10.672 ==> default: -> value=-device,
00:01:10.672 ==> default: -> value=nvme-subsys,id=fdp-subsys3,fdp=on,fdp.runs=96M,fdp.nrg=2,fdp.nruh=8,
00:01:10.672 ==> default: -> value=-device,
00:01:10.672 ==> default: -> value=nvme,id=nvme-3,serial=12343,addr=0x13,subsys=fdp-subsys3,
00:01:10.672 ==> default: -> value=-drive,
00:01:10.672 ==> default: -> value=format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme-fdp.img,if=none,id=nvme-3-drive0,
00:01:10.672 ==> default: -> value=-device,
00:01:10.672 ==> default: -> value=nvme-ns,drive=nvme-3-drive0,bus=nvme-3,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096,
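Each -device/-drive pair above maps one raw backing file onto an emulated NVMe controller or namespace: nvme-2 carries three namespaces (the multi* images), and nvme-3 is joined to an NVMe subsystem with Flexible Data Placement enabled (fdp=on, 2 reclaim groups, 8 reclaim unit handles). Pulled out of the vagrant/libvirt wrapper, a single-controller slice of that command line would look roughly like this (binary path and device arguments copied from the log):

    # Sketch: one-controller excerpt of the QEMU invocation vagrant assembles.
    /usr/local/qemu/vanilla-v8.0.0/bin/qemu-system-x86_64 \
      -device nvme,id=nvme-1,serial=12341,addr=0x11 \
      -drive format=raw,file=/var/lib/libvirt/images/backends/ex9-nvme.img,if=none,id=nvme-1-drive0 \
      -device nvme-ns,drive=nvme-1-drive0,bus=nvme-1,nsid=1,zoned=false,logical_block_size=4096,physical_block_size=4096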
00:01:10.933 ==> default: Creating shared folders metadata...
00:01:10.933 ==> default: Starting domain.
00:01:12.874 ==> default: Waiting for domain to get an IP address...
00:01:30.998 ==> default: Waiting for SSH to become available...
00:01:30.998 ==> default: Configuring and enabling network interfaces...
00:01:33.546 default: SSH address: 192.168.121.65:22
00:01:33.546 default: SSH username: vagrant
00:01:33.546 default: SSH auth method: private key
00:01:35.484 ==> default: Rsyncing folder: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/spdk/ => /home/vagrant/spdk_repo/spdk
00:01:43.630 ==> default: Mounting SSHFS shared folder...
00:01:45.549 ==> default: Mounting folder via SSHFS: /mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output => /home/vagrant/spdk_repo/output
00:01:45.549 ==> default: Checking Mount..
00:01:46.935 ==> default: Folder Successfully Mounted!
00:01:46.935
00:01:46.935 SUCCESS!
00:01:46.935
00:01:46.935 cd to /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt and type "vagrant ssh" to use.
00:01:46.935 Use vagrant "suspend" and vagrant "resume" to stop and start.
00:01:46.935 Use vagrant "destroy" followed by "rm -rf /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt" to destroy all trace of vm.
00:01:46.935
00:01:46.947 [Pipeline] }
00:01:46.963 [Pipeline] // stage
00:01:46.998 [Pipeline] dir
00:01:46.998 Running in /var/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt
00:01:47.000 [Pipeline] {
00:01:47.012 [Pipeline] catchError
00:01:47.014 [Pipeline] {
00:01:47.029 [Pipeline] sh
00:01:47.363 + vagrant ssh-config --host vagrant
00:01:47.363 + sed -ne '/^Host/,$p'
00:01:47.363 + tee ssh_conf
00:01:49.916 Host vagrant
00:01:49.916 HostName 192.168.121.65
00:01:49.916 User vagrant
00:01:49.916 Port 22
00:01:49.916 UserKnownHostsFile /dev/null
00:01:49.916 StrictHostKeyChecking no
00:01:49.916 PasswordAuthentication no
00:01:49.916 IdentityFile /var/lib/libvirt/images/.vagrant.d/boxes/spdk-VAGRANTSLASH-fedora39/39-1.5-1721788873-2326/libvirt/fedora39
00:01:49.916 IdentitiesOnly yes
00:01:49.916 LogLevel FATAL
00:01:49.916 ForwardAgent yes
00:01:49.916 ForwardX11 yes
00:01:49.916
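Everything after this point talks to the VM through that generated ssh_conf, so the same session can be reached by hand from the workspace directory; for example (host alias and option pattern from the log, the copied filename is hypothetical):

    # Sketch: manual access to the test VM via the generated ssh_conf.
    ssh -F ssh_conf vagrant@vagrant 'uname -a'
    scp -F ssh_conf some-local-script.sh vagrant@vagrant:./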
00:01:49.931 [Pipeline] withEnv
00:01:49.933 [Pipeline] {
00:01:49.947 [Pipeline] sh
00:01:50.232 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant '#!/bin/bash
00:01:50.232 source /etc/os-release
00:01:50.232 [[ -e /image.version ]] && img=$(< /image.version)
00:01:50.232 # Minimal, systemd-like check.
00:01:50.232 if [[ -e /.dockerenv ]]; then
00:01:50.232 # Clear garbage from the node'\''s name:
00:01:50.232 # agt-er_autotest_547-896 -> autotest_547-896
00:01:50.232 # $HOSTNAME is the actual container id
00:01:50.232 agent=$HOSTNAME@${DOCKER_SWARM_PLUGIN_JENKINS_AGENT_NAME#*_}
00:01:50.232 if grep -q "/etc/hostname" /proc/self/mountinfo; then
00:01:50.232 # We can assume this is a mount from a host where container is running,
00:01:50.232 # so fetch its hostname to easily identify the target swarm worker.
00:01:50.232 container="$(< /etc/hostname) ($agent)"
00:01:50.232 else
00:01:50.232 # Fallback
00:01:50.232 container=$agent
00:01:50.232 fi
00:01:50.232 fi
00:01:50.232 echo "${NAME} ${VERSION_ID}|$(uname -r)|${img:-N/A}|${container:-N/A}"
00:01:50.232 '
00:01:50.505 [Pipeline] }
00:01:50.515 [Pipeline] // withEnv
00:01:50.522 [Pipeline] setCustomBuildProperty
00:01:50.534 [Pipeline] stage
00:01:50.535 [Pipeline] { (Tests)
00:01:50.549 [Pipeline] sh
00:01:50.831 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/autoruner.sh vagrant@vagrant:./
00:01:51.108 [Pipeline] sh
00:01:51.393 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/jbp/jenkins/jjb-config/jobs/scripts/pkgdep-autoruner.sh vagrant@vagrant:./
00:01:51.669 [Pipeline] timeout
00:01:51.669 Timeout set to expire in 50 min
00:01:51.671 [Pipeline] {
00:01:51.685 [Pipeline] sh
00:01:51.970 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'git -C spdk_repo/spdk reset --hard'
00:01:52.540 HEAD is now at e01cb43b8 mk/spdk.common.mk sed the minor version
00:01:52.555 [Pipeline] sh
00:01:52.841 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'sudo chown vagrant:vagrant spdk_repo'
00:01:53.118 [Pipeline] sh
00:01:53.402 + scp -F ssh_conf -r /var/jenkins/workspace/nvme-vg-autotest/autorun-spdk.conf vagrant@vagrant:spdk_repo
00:01:53.679 [Pipeline] sh
00:01:53.964 + /usr/local/bin/ssh -t -F ssh_conf vagrant@vagrant 'JOB_BASE_NAME=nvme-vg-autotest ./autoruner.sh spdk_repo'
00:01:54.226 ++ readlink -f spdk_repo
00:01:54.226 + DIR_ROOT=/home/vagrant/spdk_repo
00:01:54.226 + [[ -n /home/vagrant/spdk_repo ]]
00:01:54.226 + DIR_SPDK=/home/vagrant/spdk_repo/spdk
00:01:54.226 + DIR_OUTPUT=/home/vagrant/spdk_repo/output
00:01:54.226 + [[ -d /home/vagrant/spdk_repo/spdk ]]
00:01:54.226 + [[ ! -d /home/vagrant/spdk_repo/output ]]
00:01:54.226 + [[ -d /home/vagrant/spdk_repo/output ]]
00:01:54.226 + [[ nvme-vg-autotest == pkgdep-* ]]
00:01:54.226 + cd /home/vagrant/spdk_repo
00:01:54.226 + source /etc/os-release
00:01:54.226 ++ NAME='Fedora Linux'
00:01:54.226 ++ VERSION='39 (Cloud Edition)'
00:01:54.226 ++ ID=fedora
00:01:54.226 ++ VERSION_ID=39
00:01:54.226 ++ VERSION_CODENAME=
00:01:54.226 ++ PLATFORM_ID=platform:f39
00:01:54.226 ++ PRETTY_NAME='Fedora Linux 39 (Cloud Edition)'
00:01:54.226 ++ ANSI_COLOR='0;38;2;60;110;180'
00:01:54.226 ++ LOGO=fedora-logo-icon
00:01:54.226 ++ CPE_NAME=cpe:/o:fedoraproject:fedora:39
00:01:54.226 ++ HOME_URL=https://fedoraproject.org/
00:01:54.226 ++ DOCUMENTATION_URL=https://docs.fedoraproject.org/en-US/fedora/f39/system-administrators-guide/
00:01:54.226 ++ SUPPORT_URL=https://ask.fedoraproject.org/
00:01:54.226 ++ BUG_REPORT_URL=https://bugzilla.redhat.com/
00:01:54.226 ++ REDHAT_BUGZILLA_PRODUCT=Fedora
00:01:54.226 ++ REDHAT_BUGZILLA_PRODUCT_VERSION=39
00:01:54.226 ++ REDHAT_SUPPORT_PRODUCT=Fedora
00:01:54.226 ++ REDHAT_SUPPORT_PRODUCT_VERSION=39
00:01:54.226 ++ SUPPORT_END=2024-11-12
00:01:54.226 ++ VARIANT='Cloud Edition'
00:01:54.226 ++ VARIANT_ID=cloud
00:01:54.226 + uname -a
00:01:54.226 Linux fedora39-cloud-1721788873-2326 6.8.9-200.fc39.x86_64 #1 SMP PREEMPT_DYNAMIC Wed Jul 24 03:04:40 UTC 2024 x86_64 GNU/Linux
00:01:54.226 + sudo /home/vagrant/spdk_repo/spdk/scripts/setup.sh status
00:01:54.487 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:01:54.777 Hugepages
00:01:54.777 node hugesize free / total
00:01:54.777 node0 1048576kB 0 / 0
00:01:54.777 node0 2048kB 0 / 0
00:01:54.777
00:01:54.777 Type BDF Vendor Device NUMA Driver Device Block devices
00:01:54.777 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda
00:01:54.777 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1
00:01:54.777 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1
00:01:54.777 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3
00:01:55.048 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1
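The table ties back to the QEMU arguments earlier: serials 12340-12343 were attached at addr=0x10-0x13 and surface in the guest as PCI functions 0000:00:10.0-0000:00:13.0, with the three-namespace controller enumerated as nvme1. A quick hypothetical cross-check from inside the VM, using standard sysfs layout:

    # Sketch (hypothetical check): map NVMe controllers back to their PCI addresses.
    for c in /sys/class/nvme/nvme*; do
        echo "$c -> $(basename "$(readlink -f "$c/device")")"
    done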
00:01:55.048 + rm -f /tmp/spdk-ld-path
00:01:55.048 + source autorun-spdk.conf
00:01:55.048 ++ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.048 ++ SPDK_TEST_NVME=1
00:01:55.048 ++ SPDK_TEST_FTL=1
00:01:55.048 ++ SPDK_TEST_ISAL=1
00:01:55.048 ++ SPDK_RUN_ASAN=1
00:01:55.048 ++ SPDK_RUN_UBSAN=1
00:01:55.048 ++ SPDK_TEST_XNVME=1
00:01:55.048 ++ SPDK_TEST_NVME_FDP=1
00:01:55.048 ++ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.048 ++ RUN_NIGHTLY=1
00:01:55.048 + (( SPDK_TEST_NVME_CMB == 1 || SPDK_TEST_NVME_PMR == 1 ))
00:01:55.048 + [[ -n '' ]]
00:01:55.048 + sudo git config --global --add safe.directory /home/vagrant/spdk_repo/spdk
00:01:55.048 + for M in /var/spdk/build-*-manifest.txt
00:01:55.048 + [[ -f /var/spdk/build-kernel-manifest.txt ]]
00:01:55.048 + cp /var/spdk/build-kernel-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.048 + for M in /var/spdk/build-*-manifest.txt
00:01:55.048 + [[ -f /var/spdk/build-pkg-manifest.txt ]]
00:01:55.048 + cp /var/spdk/build-pkg-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.048 + for M in /var/spdk/build-*-manifest.txt
00:01:55.048 + [[ -f /var/spdk/build-repo-manifest.txt ]]
00:01:55.048 + cp /var/spdk/build-repo-manifest.txt /home/vagrant/spdk_repo/output/
00:01:55.048 ++ uname
00:01:55.048 + [[ Linux == \L\i\n\u\x ]]
00:01:55.048 + sudo dmesg -T
00:01:55.048 + sudo dmesg --clear
00:01:55.048 + dmesg_pid=5029
+ [[ Fedora Linux == FreeBSD ]]
00:01:55.048 + export UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.048 + UNBIND_ENTIRE_IOMMU_GROUP=yes
00:01:55.048 + [[ -e /var/spdk/dependencies/vhost/spdk_test_image.qcow2 ]]
00:01:55.048 + [[ -x /usr/src/fio-static/fio ]]
00:01:55.048 + sudo dmesg -Tw
00:01:55.048 + export FIO_BIN=/usr/src/fio-static/fio
00:01:55.048 + FIO_BIN=/usr/src/fio-static/fio
00:01:55.048 + [[ '' == \/\q\e\m\u\_\v\f\i\o\/* ]]
00:01:55.048 + [[ ! -v VFIO_QEMU_BIN ]]
00:01:55.048 + [[ -e /usr/local/qemu/vfio-user-latest ]]
00:01:55.048 + export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.048 + VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64
00:01:55.048 + [[ -e /usr/local/qemu/vanilla-latest ]]
00:01:55.048 + export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.048 + QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64
00:01:55.048 + spdk/autorun.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.048 01:57:19 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:55.048 01:57:19 -- spdk/autorun.sh@20 -- $ source /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@1 -- $ SPDK_RUN_FUNCTIONAL_TEST=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@2 -- $ SPDK_TEST_NVME=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@3 -- $ SPDK_TEST_FTL=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@4 -- $ SPDK_TEST_ISAL=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@5 -- $ SPDK_RUN_ASAN=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@6 -- $ SPDK_RUN_UBSAN=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@7 -- $ SPDK_TEST_XNVME=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@8 -- $ SPDK_TEST_NVME_FDP=1
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@9 -- $ SPDK_ABI_DIR=/home/vagrant/spdk_repo/spdk-abi
00:01:55.048 01:57:19 -- spdk_repo/autorun-spdk.conf@10 -- $ RUN_NIGHTLY=1
00:01:55.048 01:57:19 -- spdk/autorun.sh@22 -- $ trap 'timing_finish || exit 1' EXIT
00:01:55.048 01:57:19 -- spdk/autorun.sh@25 -- $ /home/vagrant/spdk_repo/spdk/autobuild.sh /home/vagrant/spdk_repo/autorun-spdk.conf
00:01:55.310 01:57:19 -- common/autotest_common.sh@1710 -- $ [[ n == y ]]
00:01:55.310 01:57:19 -- common/autobuild_common.sh@15 -- $ source /home/vagrant/spdk_repo/spdk/scripts/common.sh
00:01:55.310 01:57:19 -- scripts/common.sh@15 -- $ shopt -s extglob
00:01:55.310 01:57:19 -- scripts/common.sh@544 -- $ [[ -e /bin/wpdk_common.sh ]]
00:01:55.310 01:57:19 -- scripts/common.sh@552 -- $ [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]]
00:01:55.310 01:57:19 -- scripts/common.sh@553 -- $ source /etc/opt/spdk-pkgdep/paths/export.sh
00:01:55.310 01:57:19 -- paths/export.sh@2 -- $ PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.310 01:57:19 -- paths/export.sh@3 -- $ PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.310 01:57:19 -- paths/export.sh@4 -- $ PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.310 01:57:19 -- paths/export.sh@5 -- $ export PATH
00:01:55.310 01:57:19 -- paths/export.sh@6 -- $ echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/home/vagrant/.local/bin:/home/vagrant/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin
00:01:55.310 01:57:19 -- common/autobuild_common.sh@492 -- $ out=/home/vagrant/spdk_repo/spdk/../output
00:01:55.310 01:57:19 -- common/autobuild_common.sh@493 -- $ date +%s
00:01:55.310 01:57:19 -- common/autobuild_common.sh@493 -- $ mktemp -dt spdk_1734227839.XXXXXX
00:01:55.310 01:57:19 -- common/autobuild_common.sh@493 -- $ SPDK_WORKSPACE=/tmp/spdk_1734227839.kr4moE
00:01:55.310 01:57:19 -- common/autobuild_common.sh@495 -- $ [[ -n '' ]]
00:01:55.310 01:57:19 -- common/autobuild_common.sh@499 -- $ '[' -n '' ']'
00:01:55.310 01:57:19 -- common/autobuild_common.sh@502 -- $ scanbuild_exclude='--exclude /home/vagrant/spdk_repo/spdk/dpdk/'
00:01:55.310 01:57:19 -- common/autobuild_common.sh@506 -- $ scanbuild_exclude+=' --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp'
00:01:55.310 01:57:19 -- common/autobuild_common.sh@508 -- $ scanbuild='scan-build -o /home/vagrant/spdk_repo/spdk/../output/scan-build-tmp --exclude /home/vagrant/spdk_repo/spdk/dpdk/ --exclude /home/vagrant/spdk_repo/spdk/xnvme --exclude /tmp --status-bugs'
00:01:55.310 01:57:19 -- common/autobuild_common.sh@509 -- $ get_config_params
00:01:55.310 01:57:19 -- common/autotest_common.sh@409 -- $ xtrace_disable
00:01:55.310 01:57:19 -- common/autotest_common.sh@10 -- $ set +x
00:01:55.310 01:57:19 -- common/autobuild_common.sh@509 -- $ config_params='--enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme'
00:01:55.310 01:57:19 -- common/autobuild_common.sh@511 -- $ start_monitor_resources
00:01:55.310 01:57:19 -- pm/common@17 -- $ local monitor
00:01:55.310 01:57:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.310 01:57:19 -- pm/common@19 -- $ for monitor in "${MONITOR_RESOURCES[@]}"
00:01:55.310 01:57:19 -- pm/common@25 -- $ sleep 1
00:01:55.310 01:57:19 -- pm/common@21 -- $ date +%s
00:01:55.310 01:57:19 -- pm/common@21 -- $ date +%s
00:01:55.310 01:57:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734227839
00:01:55.310 01:57:19 -- pm/common@21 -- $ /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autobuild.sh.1734227839
00:01:55.310 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734227839_collect-cpu-load.pm.log
00:01:55.310 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autobuild.sh.1734227839_collect-vmstat.pm.log
00:01:56.253 01:57:20 -- common/autobuild_common.sh@512 -- $ trap stop_monitor_resources EXIT
00:01:56.253 01:57:20 -- spdk/autobuild.sh@11 -- $ SPDK_TEST_AUTOBUILD=
00:01:56.253 01:57:20 -- spdk/autobuild.sh@12 -- $ umask 022
00:01:56.253 01:57:20 -- spdk/autobuild.sh@13 -- $ cd /home/vagrant/spdk_repo/spdk
00:01:56.253 01:57:20 -- spdk/autobuild.sh@16 -- $ date -u
00:01:56.253 Sun Dec 15 01:57:20 AM UTC 2024
00:01:56.253 01:57:20 -- spdk/autobuild.sh@17 -- $ git describe --tags
00:01:56.253 v25.01-rc1-2-ge01cb43b8
00:01:56.253 01:57:20 -- spdk/autobuild.sh@19 -- $ '[' 1 -eq 1 ']'
00:01:56.253 01:57:20 -- spdk/autobuild.sh@20 -- $ run_test asan echo 'using asan'
00:01:56.253 01:57:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.253 01:57:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.253 01:57:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.253 ************************************
00:01:56.253 START TEST asan
00:01:56.253 ************************************
00:01:56.253 using asan
00:01:56.253 01:57:20 asan -- common/autotest_common.sh@1129 -- $ echo 'using asan'
00:01:56.253
00:01:56.253 real 0m0.000s
00:01:56.253 user 0m0.000s
00:01:56.253 sys 0m0.000s
00:01:56.253 01:57:20 asan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:56.253 ************************************
00:01:56.253 END TEST asan
00:01:56.253 ************************************
00:01:56.253 01:57:20 asan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.253 01:57:20 -- spdk/autobuild.sh@23 -- $ '[' 1 -eq 1 ']'
00:01:56.253 01:57:20 -- spdk/autobuild.sh@24 -- $ run_test ubsan echo 'using ubsan'
00:01:56.253 01:57:20 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:01:56.253 01:57:20 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:01:56.253 01:57:20 -- common/autotest_common.sh@10 -- $ set +x
00:01:56.253 ************************************
00:01:56.253 START TEST ubsan
00:01:56.253 ************************************
00:01:56.253 using ubsan
00:01:56.253 ************************************
00:01:56.253 END TEST ubsan
00:01:56.253 ************************************
00:01:56.253 01:57:20 ubsan -- common/autotest_common.sh@1129 -- $ echo 'using ubsan'
00:01:56.253
00:01:56.253 real 0m0.000s
00:01:56.253 user 0m0.000s
00:01:56.253 sys 0m0.000s
00:01:56.253 01:57:20 ubsan -- common/autotest_common.sh@1130 -- $ xtrace_disable
00:01:56.253 01:57:20 ubsan -- common/autotest_common.sh@10 -- $ set +x
00:01:56.253 01:57:21 -- spdk/autobuild.sh@27 -- $ '[' -n '' ']'
00:01:56.253 01:57:21 -- spdk/autobuild.sh@31 -- $ case "$SPDK_TEST_AUTOBUILD" in
00:01:56.253 01:57:21 -- spdk/autobuild.sh@47 -- $ [[ 0 -eq 1 ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@51 -- $ [[ 0 -eq 1 ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@55 -- $ [[ -n '' ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@57 -- $ [[ 0 -eq 1 ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@59 -- $ [[ 0 -eq 1 ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@62 -- $ [[ 0 -eq 1 ]]
00:01:56.253 01:57:21 -- spdk/autobuild.sh@67 -- $ /home/vagrant/spdk_repo/spdk/configure --enable-debug --enable-werror --with-rdma --with-idxd --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
00:01:56.514 Using default SPDK env in /home/vagrant/spdk_repo/spdk/lib/env_dpdk
00:01:56.514 Using default DPDK in /home/vagrant/spdk_repo/spdk/dpdk/build
00:01:56.775 Using 'verbs' RDMA provider
00:02:09.953 Configuring ISA-L (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal.log)...done.
00:02:19.961 Configuring ISA-L-crypto (logfile: /home/vagrant/spdk_repo/spdk/.spdk-isal-crypto.log)...done.
00:02:19.961 Creating mk/config.mk...done.
00:02:19.961 Creating mk/cc.flags.mk...done.
00:02:19.961 Type 'make' to build.
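For anyone reproducing this build outside CI: the configure invocation above plus the make that follows are the whole build step. A sketch, with the flags copied verbatim from the log and only the working directory assumed:

    # Sketch: the same build outside Jenkins.
    cd /home/vagrant/spdk_repo/spdk
    ./configure --enable-debug --enable-werror --with-rdma --with-idxd \
        --with-fio=/usr/src/fio --with-iscsi-initiator --disable-unit-tests \
        --enable-ubsan --enable-asan --enable-coverage --with-ublk --with-xnvme --with-shared
    make -j10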
00:02:19.961 01:57:44 -- spdk/autobuild.sh@70 -- $ run_test make make -j10
00:02:19.961 01:57:44 -- common/autotest_common.sh@1105 -- $ '[' 3 -le 1 ']'
00:02:19.961 01:57:44 -- common/autotest_common.sh@1111 -- $ xtrace_disable
00:02:19.961 01:57:44 -- common/autotest_common.sh@10 -- $ set +x
00:02:19.961 ************************************
00:02:19.961 START TEST make
00:02:19.961 ************************************
00:02:19.961 01:57:44 make -- common/autotest_common.sh@1129 -- $ make -j10
00:02:19.961 (cd /home/vagrant/spdk_repo/spdk/xnvme && \
00:02:19.961 export PKG_CONFIG_PATH=$PKG_CONFIG_PATH:/usr/lib/pkgconfig:/usr/lib64/pkgconfig && \
00:02:19.961 meson setup builddir \
00:02:19.961 -Dwith-libaio=enabled \
00:02:19.961 -Dwith-liburing=enabled \
00:02:19.961 -Dwith-libvfn=disabled \
00:02:19.961 -Dwith-spdk=disabled \
00:02:19.961 -Dexamples=false \
00:02:19.961 -Dtests=false \
00:02:19.961 -Dtools=false && \
00:02:19.961 meson compile -C builddir && \
00:02:19.961 cd -)
00:02:21.894 The Meson build system
00:02:21.894 Version: 1.5.0
00:02:21.894 Source dir: /home/vagrant/spdk_repo/spdk/xnvme
00:02:21.894 Build dir: /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:21.894 Build type: native build
00:02:21.894 Project name: xnvme
00:02:21.894 Project version: 0.7.5
00:02:21.894 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:21.894 C linker for the host machine: cc ld.bfd 2.40-14
00:02:21.894 Host machine cpu family: x86_64
00:02:21.894 Host machine cpu: x86_64
00:02:21.894 Message: host_machine.system: linux
00:02:21.894 Compiler for C supports arguments -Wno-missing-braces: YES
00:02:21.894 Compiler for C supports arguments -Wno-cast-function-type: YES
00:02:21.894 Compiler for C supports arguments -Wno-strict-aliasing: YES
00:02:21.894 Run-time dependency threads found: YES
00:02:21.894 Has header "setupapi.h" : NO
00:02:21.894 Has header "linux/blkzoned.h" : YES
00:02:21.894 Has header "linux/blkzoned.h" : YES (cached)
00:02:21.894 Has header "libaio.h" : YES
00:02:21.894 Library aio found: YES
00:02:21.894 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:21.894 Run-time dependency liburing found: YES 2.2
00:02:21.894 Dependency libvfn skipped: feature with-libvfn disabled
00:02:21.894 Found CMake: /usr/bin/cmake (3.27.7)
00:02:21.894 Run-time dependency libisal found: NO (tried pkgconfig and cmake)
00:02:21.894 Subproject spdk : skipped: feature with-spdk disabled
00:02:21.894 Run-time dependency appleframeworks found: NO (tried framework)
00:02:21.894 Run-time dependency appleframeworks found: NO (tried framework)
00:02:21.894 Library rt found: YES
00:02:21.894 Checking for function "clock_gettime" with dependency -lrt: YES
00:02:21.894 Configuring xnvme_config.h using configuration
00:02:21.894 Configuring xnvme.spec using configuration
00:02:21.894 Run-time dependency bash-completion found: YES 2.11
00:02:21.894 Message: Bash-completions: /usr/share/bash-completion/completions
00:02:21.894 Program cp found: YES (/usr/bin/cp)
00:02:21.894 Build targets in project: 3
00:02:21.894
00:02:21.894 xnvme 0.7.5
00:02:21.894
00:02:21.894 Subprojects
00:02:21.894 spdk : NO Feature 'with-spdk' disabled
00:02:21.894
00:02:21.894 User defined options
00:02:21.894 examples : false
00:02:21.894 tests : false
00:02:21.894 tools : false
00:02:21.894 with-libaio : enabled
00:02:21.894 with-liburing: enabled
00:02:21.894 with-libvfn : disabled
00:02:21.894 with-spdk : disabled
00:02:21.894
00:02:21.894 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:22.460 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/xnvme/builddir'
00:02:22.460 [1/76] Generating toolbox/xnvme-driver-script with a custom command
00:02:22.460 [2/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd.c.o
00:02:22.460 [3/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_admin_shim.c.o
00:02:22.460 [4/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_dev.c.o
00:02:22.460 [5/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_async.c.o
00:02:22.460 [6/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_mem_posix.c.o
00:02:22.460 [7/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_fbsd_nvme.c.o
00:02:22.460 [8/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_nil.c.o
00:02:22.460 [9/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_adm.c.o
00:02:22.460 [10/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_sync_psync.c.o
00:02:22.460 [11/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux.c.o
00:02:22.460 [12/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_emu.c.o
00:02:22.460 [13/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_posix.c.o
00:02:22.460 [14/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_admin.c.o
00:02:22.460 [15/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos.c.o
00:02:22.460 [16/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_dev.c.o
00:02:22.460 [17/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_libaio.c.o
00:02:22.460 [18/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_cbi_async_thrpool.c.o
00:02:22.460 [19/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_macos_sync.c.o
00:02:22.460 [20/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_dev.c.o
00:02:22.460 [21/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_hugepage.c.o
00:02:22.718 [22/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk.c.o
00:02:22.718 [23/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be.c.o
00:02:22.718 [24/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_ucmd.c.o
00:02:22.718 [25/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_async_liburing.c.o
00:02:22.718 [26/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_admin.c.o
00:02:22.718 [27/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk.c.o
00:02:22.718 [28/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_nvme.c.o
00:02:22.718 [29/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_nosys.c.o
00:02:22.718 [30/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_admin.c.o
00:02:22.718 [31/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_dev.c.o
00:02:22.718 [32/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_linux_block.c.o
00:02:22.718 [33/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_async.c.o
00:02:22.718 [34/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_dev.c.o
00:02:22.718 [35/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_mem.c.o
00:02:22.718 [36/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_spdk_sync.c.o
00:02:22.718 [37/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_ramdisk_sync.c.o
00:02:22.718 [38/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_dev.c.o
00:02:22.718 [39/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_admin.c.o
00:02:22.718 [40/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_sync.c.o
00:02:22.718 [41/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_mem.c.o
00:02:22.718 [42/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio.c.o
00:02:22.718 [43/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_vfio_async.c.o
00:02:22.718 [44/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp.c.o
00:02:22.718 [45/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows.c.o
00:02:22.718 [46/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_ioring.c.o
00:02:22.718 [47/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_async_iocp_th.c.o
00:02:22.718 [48/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_block.c.o
00:02:22.718 [49/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_mem.c.o
00:02:22.718 [50/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_nvme.c.o
00:02:22.718 [51/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_dev.c.o
00:02:22.718 [52/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_be_windows_fs.c.o
00:02:22.718 [53/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf_entries.c.o
00:02:22.718 [54/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_file.c.o
00:02:22.718 [55/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ident.c.o
00:02:22.718 [56/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cmd.c.o
00:02:22.718 [57/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_geo.c.o
00:02:22.975 [58/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_req.c.o
00:02:22.975 [59/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_libconf.c.o
00:02:22.975 [60/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_opts.c.o
00:02:22.975 [61/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_lba.c.o
00:02:22.975 [62/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_buf.c.o
00:02:22.975 [63/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_kvs.c.o
00:02:22.975 [64/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_nvm.c.o
00:02:22.975 [65/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_queue.c.o
00:02:22.975 [66/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_ver.c.o
00:02:22.975 [67/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec_pp.c.o
00:02:22.975 [68/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_crc.c.o
00:02:22.975 [69/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_topology.c.o
00:02:22.975 [70/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_dev.c.o
00:02:22.975 [71/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_znd.c.o
00:02:23.234 [72/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_pi.c.o
00:02:23.234 [73/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_cli.c.o
00:02:23.492 [74/76] Compiling C object lib/libxnvme.so.0.7.5.p/xnvme_spec.c.o
00:02:23.492 [75/76] Linking static target lib/libxnvme.a
00:02:23.492 [76/76] Linking target lib/libxnvme.so.0.7.5
00:02:23.492 INFO: autodetecting backend as ninja
00:02:23.492 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/xnvme/builddir
00:02:23.492 /home/vagrant/spdk_repo/spdk/xnvmebuild
00:02:30.096 The Meson build system
00:02:30.096 Version: 1.5.0
00:02:30.096 Source dir: /home/vagrant/spdk_repo/spdk/dpdk
00:02:30.096 Build dir: /home/vagrant/spdk_repo/spdk/dpdk/build-tmp
00:02:30.096 Build type: native build
00:02:30.096 Program cat found: YES (/usr/bin/cat)
00:02:30.096 Project name: DPDK
00:02:30.096 Project version: 24.03.0
00:02:30.096 C compiler for the host machine: cc (gcc 13.3.1 "cc (GCC) 13.3.1 20240522 (Red Hat 13.3.1-1)")
00:02:30.096 C linker for the host machine: cc ld.bfd 2.40-14
00:02:30.096 Host machine cpu family: x86_64
00:02:30.096 Host machine cpu: x86_64
00:02:30.096 Message: ## Building in Developer Mode ##
00:02:30.096 Program pkg-config found: YES (/usr/bin/pkg-config)
00:02:30.096 Program check-symbols.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/check-symbols.sh)
00:02:30.096 Program options-ibverbs-static.sh found: YES (/home/vagrant/spdk_repo/spdk/dpdk/buildtools/options-ibverbs-static.sh)
00:02:30.096 Program python3 found: YES (/usr/bin/python3)
00:02:30.096 Program cat found: YES (/usr/bin/cat)
00:02:30.096 Compiler for C supports arguments -march=native: YES
00:02:30.096 Checking for size of "void *" : 8
00:02:30.096 Checking for size of "void *" : 8 (cached)
00:02:30.096 Compiler for C supports link arguments -Wl,--undefined-version: YES
00:02:30.096 Library m found: YES
00:02:30.096 Library numa found: YES
00:02:30.096 Has header "numaif.h" : YES
00:02:30.096 Library fdt found: NO
00:02:30.096 Library execinfo found: NO
00:02:30.096 Has header "execinfo.h" : YES
00:02:30.096 Found pkg-config: YES (/usr/bin/pkg-config) 1.9.5
00:02:30.096 Run-time dependency libarchive found: NO (tried pkgconfig)
00:02:30.096 Run-time dependency libbsd found: NO (tried pkgconfig)
00:02:30.096 Run-time dependency jansson found: NO (tried pkgconfig)
00:02:30.096 Run-time dependency openssl found: YES 3.1.1
00:02:30.096 Run-time dependency libpcap found: YES 1.10.4
00:02:30.096 Has header "pcap.h" with dependency libpcap: YES
00:02:30.096 Compiler for C supports arguments -Wcast-qual: YES
00:02:30.096 Compiler for C supports arguments -Wdeprecated: YES
00:02:30.096 Compiler for C supports arguments -Wformat: YES
00:02:30.096 Compiler for C supports arguments -Wformat-nonliteral: NO
00:02:30.096 Compiler for C supports arguments -Wformat-security: NO
00:02:30.096 Compiler for C supports arguments -Wmissing-declarations: YES
00:02:30.096 Compiler for C supports arguments -Wmissing-prototypes: YES
00:02:30.096 Compiler for C supports arguments -Wnested-externs: YES
00:02:30.096 Compiler for C supports arguments -Wold-style-definition: YES
00:02:30.096 Compiler for C supports arguments -Wpointer-arith: YES
00:02:30.096 Compiler for C supports arguments -Wsign-compare: YES
00:02:30.096 Compiler for C supports arguments -Wstrict-prototypes: YES
00:02:30.096 Compiler for C supports arguments -Wundef: YES
00:02:30.096 Compiler for C supports arguments -Wwrite-strings: YES
00:02:30.096 Compiler for C supports arguments -Wno-address-of-packed-member: YES
00:02:30.096 Compiler for C supports arguments -Wno-packed-not-aligned: YES
00:02:30.096 Compiler for C supports arguments -Wno-missing-field-initializers: YES
00:02:30.096 Compiler for C supports arguments -Wno-zero-length-bounds: YES
00:02:30.096 Program objdump found: YES (/usr/bin/objdump)
00:02:30.096 Compiler for C supports arguments -mavx512f: YES
00:02:30.096 Checking if "AVX512 checking" compiles: YES
00:02:30.096 Fetching value of define "__SSE4_2__" : 1
00:02:30.096 Fetching value of define "__AES__" : 1
00:02:30.096 Fetching value of define "__AVX__" : 1
00:02:30.096 Fetching value of define "__AVX2__" : 1
00:02:30.096 Fetching value of define "__AVX512BW__" : 1
00:02:30.096 Fetching value of define "__AVX512CD__" : 1
00:02:30.096 Fetching value of define "__AVX512DQ__" : 1
00:02:30.096 Fetching value of define "__AVX512F__" : 1
00:02:30.096 Fetching value of define "__AVX512VL__" : 1
00:02:30.096 Fetching value of define "__PCLMUL__" : 1
00:02:30.096 Fetching value of define "__RDRND__" : 1
00:02:30.096 Fetching value of define "__RDSEED__" : 1
00:02:30.096 Fetching value of define "__VPCLMULQDQ__" : 1
00:02:30.096 Fetching value of define "__znver1__" : (undefined)
00:02:30.096 Fetching value of define "__znver2__" : (undefined)
00:02:30.096 Fetching value of define "__znver3__" : (undefined)
00:02:30.097 Fetching value of define "__znver4__" : (undefined)
00:02:30.097 Library asan found: YES
00:02:30.097 Compiler for C supports arguments -Wno-format-truncation: YES
00:02:30.097 Message: lib/log: Defining dependency "log"
00:02:30.097 Message: lib/kvargs: Defining dependency "kvargs"
00:02:30.097 Message: lib/telemetry: Defining dependency "telemetry"
00:02:30.097 Library rt found: YES
00:02:30.097 Checking for function "getentropy" : NO
00:02:30.097 Message: lib/eal: Defining dependency "eal"
00:02:30.097 Message: lib/ring: Defining dependency "ring"
00:02:30.097 Message: lib/rcu: Defining dependency "rcu"
00:02:30.097 Message: lib/mempool: Defining dependency "mempool"
00:02:30.097 Message: lib/mbuf: Defining dependency "mbuf"
00:02:30.097 Fetching value of define "__PCLMUL__" : 1 (cached)
00:02:30.097 Fetching value of define "__AVX512F__" : 1 (cached)
00:02:30.097 Fetching value of define "__AVX512BW__" : 1 (cached)
00:02:30.097 Fetching value of define "__AVX512DQ__" : 1 (cached)
00:02:30.097 Fetching value of define "__AVX512VL__" : 1 (cached)
00:02:30.097 Fetching value of define "__VPCLMULQDQ__" : 1 (cached)
00:02:30.097 Compiler for C supports arguments -mpclmul: YES
00:02:30.097 Compiler for C supports arguments -maes: YES
00:02:30.097 Compiler for C supports arguments -mavx512f: YES (cached)
00:02:30.097 Compiler for C supports arguments -mavx512bw: YES
00:02:30.097 Compiler for C supports arguments -mavx512dq: YES
00:02:30.097 Compiler for C supports arguments -mavx512vl: YES
00:02:30.097 Compiler for C supports arguments -mvpclmulqdq: YES
00:02:30.097 Compiler for C supports arguments -mavx2: YES
00:02:30.097 Compiler for C supports arguments -mavx: YES
00:02:30.097 Message: lib/net: Defining dependency "net"
00:02:30.097 Message: lib/meter: Defining dependency "meter"
00:02:30.097 Message: lib/ethdev: Defining dependency "ethdev"
00:02:30.097 Message: lib/pci: Defining dependency "pci"
00:02:30.097 Message: lib/cmdline: Defining dependency "cmdline"
00:02:30.097 Message: lib/hash: Defining dependency "hash"
00:02:30.097 Message: lib/timer: Defining dependency "timer"
00:02:30.097 Message: lib/compressdev: Defining dependency "compressdev"
00:02:30.097 Message: lib/cryptodev: Defining dependency "cryptodev"
00:02:30.097 Message: lib/dmadev: Defining dependency "dmadev"
00:02:30.097 Compiler for C supports arguments -Wno-cast-qual: YES
00:02:30.097 Message: lib/power: Defining dependency "power"
00:02:30.097 Message: lib/reorder: Defining dependency "reorder"
00:02:30.097 Message: lib/security: Defining dependency "security"
00:02:30.097 Has header "linux/userfaultfd.h" : YES
00:02:30.097 Has header "linux/vduse.h" : YES
00:02:30.097 Message: lib/vhost: Defining dependency "vhost"
00:02:30.097 Compiler for C supports arguments -Wno-format-truncation: YES (cached)
00:02:30.097 Message: drivers/bus/pci: Defining dependency "bus_pci"
00:02:30.097 Message: drivers/bus/vdev: Defining dependency "bus_vdev"
00:02:30.097 Message: drivers/mempool/ring: Defining dependency "mempool_ring"
00:02:30.097 Message: Disabling raw/* drivers: missing internal dependency "rawdev"
00:02:30.097 Message: Disabling regex/* drivers: missing internal dependency "regexdev"
00:02:30.097 Message: Disabling ml/* drivers: missing internal dependency "mldev"
00:02:30.097 Message: Disabling event/* drivers: missing internal dependency "eventdev"
00:02:30.097 Message: Disabling baseband/* drivers: missing internal dependency "bbdev"
00:02:30.097 Message: Disabling gpu/* drivers: missing internal dependency "gpudev"
00:02:30.097 Program doxygen found: YES (/usr/local/bin/doxygen)
00:02:30.097 Configuring doxy-api-html.conf using configuration
00:02:30.097 Configuring doxy-api-man.conf using configuration
00:02:30.097 Program mandb found: YES (/usr/bin/mandb)
00:02:30.097 Program sphinx-build found: NO
00:02:30.097 Configuring rte_build_config.h using configuration
00:02:30.097 Message:
00:02:30.097 =================
00:02:30.097 Applications Enabled
00:02:30.097 =================
00:02:30.097
00:02:30.097 apps:
00:02:30.097
00:02:30.097
00:02:30.097 Message:
00:02:30.097 =================
00:02:30.097 Libraries Enabled
00:02:30.097 =================
00:02:30.097
00:02:30.097 libs:
00:02:30.097 log, kvargs, telemetry, eal, ring, rcu, mempool, mbuf,
00:02:30.097 net, meter, ethdev, pci, cmdline, hash, timer, compressdev,
00:02:30.097 cryptodev, dmadev, power, reorder, security, vhost,
00:02:30.097
00:02:30.097 Message:
00:02:30.097 ===============
00:02:30.097 Drivers Enabled
00:02:30.097 ===============
00:02:30.097
00:02:30.097 common:
00:02:30.097
00:02:30.097 bus:
00:02:30.097 pci, vdev,
00:02:30.097 mempool:
00:02:30.097 ring,
00:02:30.097 dma:
00:02:30.097
00:02:30.097 net:
00:02:30.097
00:02:30.097 crypto:
00:02:30.097
00:02:30.097 compress:
00:02:30.097
00:02:30.097 vdpa:
00:02:30.097
00:02:30.097
00:02:30.097 Message:
00:02:30.097 =================
00:02:30.097 Content Skipped
00:02:30.097 =================
00:02:30.097
00:02:30.097 apps:
00:02:30.097 dumpcap: explicitly disabled via build config
00:02:30.097 graph: explicitly disabled via build config
00:02:30.097 pdump: explicitly disabled via build config
00:02:30.097 proc-info: explicitly disabled via build config
00:02:30.097 test-acl: explicitly disabled via build config
00:02:30.097 test-bbdev: explicitly disabled via build config
00:02:30.097 test-cmdline: explicitly disabled via build config
00:02:30.097 test-compress-perf: explicitly disabled via build config
00:02:30.097 test-crypto-perf: explicitly disabled via build config
00:02:30.097 test-dma-perf: explicitly disabled via build config
00:02:30.097 test-eventdev: explicitly disabled via build config
00:02:30.097 test-fib: explicitly disabled via build config
00:02:30.097 test-flow-perf: explicitly disabled via build config
00:02:30.097 test-gpudev: explicitly disabled via build config
00:02:30.097 test-mldev: explicitly disabled via build config
00:02:30.097 test-pipeline: explicitly disabled via build config
00:02:30.097 test-pmd: explicitly disabled via build config
00:02:30.097 test-regex: explicitly disabled via build config
00:02:30.097 test-sad: explicitly disabled via build config
00:02:30.097 test-security-perf: explicitly disabled via build config
00:02:30.097
00:02:30.097 libs:
00:02:30.097 argparse: explicitly disabled via build config
00:02:30.097 metrics: explicitly disabled via build config
00:02:30.097 acl: explicitly disabled via build config
00:02:30.097 bbdev: explicitly disabled via build config
00:02:30.097 bitratestats: explicitly disabled via build config
00:02:30.097 bpf: explicitly disabled via build config
00:02:30.097 cfgfile: explicitly disabled via build config
00:02:30.097 distributor: explicitly disabled via build config
00:02:30.097 efd: explicitly disabled via build config
00:02:30.097 eventdev: explicitly disabled via build config
00:02:30.097 dispatcher: explicitly disabled via build config
00:02:30.097 gpudev: explicitly disabled via build config
00:02:30.097 gro: explicitly disabled via build config
00:02:30.097 gso: explicitly disabled via build config
00:02:30.097 ip_frag: explicitly disabled via build config
00:02:30.097 jobstats: explicitly disabled via build config
00:02:30.097 latencystats: explicitly disabled via build config
00:02:30.097 lpm: explicitly disabled via build config
00:02:30.097 member: explicitly disabled via build config
00:02:30.097 pcapng: explicitly disabled via build config
00:02:30.097 rawdev: explicitly disabled via build config
00:02:30.097 regexdev: explicitly disabled via build config
00:02:30.097 mldev: explicitly disabled via build config
00:02:30.097 rib: explicitly disabled via build config
00:02:30.097 sched: explicitly disabled via build config
00:02:30.097 stack: explicitly disabled via build config
00:02:30.097 ipsec: explicitly disabled via build config
00:02:30.097 pdcp: explicitly disabled via build config
00:02:30.097 fib: explicitly disabled via build config
00:02:30.097 port: explicitly disabled via build config
00:02:30.097 pdump: explicitly disabled via build config
00:02:30.097 table: explicitly disabled via build config
00:02:30.097 pipeline: explicitly disabled via build config
00:02:30.097 graph: explicitly disabled via build config
00:02:30.097 node: explicitly disabled via build config
00:02:30.097
00:02:30.097 drivers:
00:02:30.097 common/cpt: not in enabled drivers build config
00:02:30.097 common/dpaax: not in enabled drivers build config
00:02:30.097 common/iavf: not in enabled drivers build config
00:02:30.097 common/idpf: not in enabled drivers build config
00:02:30.097 common/ionic: not in enabled drivers build config
00:02:30.097 common/mvep: not in enabled drivers build config
00:02:30.097 common/octeontx: not in enabled drivers build config
00:02:30.097 bus/auxiliary: not in enabled drivers build config
00:02:30.097 bus/cdx: not in enabled drivers build config
00:02:30.097 bus/dpaa: not in enabled drivers build config
00:02:30.097 bus/fslmc: not in enabled drivers build config
00:02:30.097 bus/ifpga: not in enabled drivers build config
00:02:30.097 bus/platform: not in enabled drivers build config
00:02:30.097 bus/uacce: not in enabled
drivers build config 00:02:30.097 bus/vmbus: not in enabled drivers build config 00:02:30.097 common/cnxk: not in enabled drivers build config 00:02:30.097 common/mlx5: not in enabled drivers build config 00:02:30.097 common/nfp: not in enabled drivers build config 00:02:30.097 common/nitrox: not in enabled drivers build config 00:02:30.097 common/qat: not in enabled drivers build config 00:02:30.097 common/sfc_efx: not in enabled drivers build config 00:02:30.097 mempool/bucket: not in enabled drivers build config 00:02:30.097 mempool/cnxk: not in enabled drivers build config 00:02:30.097 mempool/dpaa: not in enabled drivers build config 00:02:30.097 mempool/dpaa2: not in enabled drivers build config 00:02:30.097 mempool/octeontx: not in enabled drivers build config 00:02:30.097 mempool/stack: not in enabled drivers build config 00:02:30.097 dma/cnxk: not in enabled drivers build config 00:02:30.097 dma/dpaa: not in enabled drivers build config 00:02:30.097 dma/dpaa2: not in enabled drivers build config 00:02:30.097 dma/hisilicon: not in enabled drivers build config 00:02:30.097 dma/idxd: not in enabled drivers build config 00:02:30.097 dma/ioat: not in enabled drivers build config 00:02:30.097 dma/skeleton: not in enabled drivers build config 00:02:30.097 net/af_packet: not in enabled drivers build config 00:02:30.097 net/af_xdp: not in enabled drivers build config 00:02:30.097 net/ark: not in enabled drivers build config 00:02:30.097 net/atlantic: not in enabled drivers build config 00:02:30.097 net/avp: not in enabled drivers build config 00:02:30.097 net/axgbe: not in enabled drivers build config 00:02:30.097 net/bnx2x: not in enabled drivers build config 00:02:30.097 net/bnxt: not in enabled drivers build config 00:02:30.097 net/bonding: not in enabled drivers build config 00:02:30.097 net/cnxk: not in enabled drivers build config 00:02:30.097 net/cpfl: not in enabled drivers build config 00:02:30.097 net/cxgbe: not in enabled drivers build config 00:02:30.098 net/dpaa: not in enabled drivers build config 00:02:30.098 net/dpaa2: not in enabled drivers build config 00:02:30.098 net/e1000: not in enabled drivers build config 00:02:30.098 net/ena: not in enabled drivers build config 00:02:30.098 net/enetc: not in enabled drivers build config 00:02:30.098 net/enetfec: not in enabled drivers build config 00:02:30.098 net/enic: not in enabled drivers build config 00:02:30.098 net/failsafe: not in enabled drivers build config 00:02:30.098 net/fm10k: not in enabled drivers build config 00:02:30.098 net/gve: not in enabled drivers build config 00:02:30.098 net/hinic: not in enabled drivers build config 00:02:30.098 net/hns3: not in enabled drivers build config 00:02:30.098 net/i40e: not in enabled drivers build config 00:02:30.098 net/iavf: not in enabled drivers build config 00:02:30.098 net/ice: not in enabled drivers build config 00:02:30.098 net/idpf: not in enabled drivers build config 00:02:30.098 net/igc: not in enabled drivers build config 00:02:30.098 net/ionic: not in enabled drivers build config 00:02:30.098 net/ipn3ke: not in enabled drivers build config 00:02:30.098 net/ixgbe: not in enabled drivers build config 00:02:30.098 net/mana: not in enabled drivers build config 00:02:30.098 net/memif: not in enabled drivers build config 00:02:30.098 net/mlx4: not in enabled drivers build config 00:02:30.098 net/mlx5: not in enabled drivers build config 00:02:30.098 net/mvneta: not in enabled drivers build config 00:02:30.098 net/mvpp2: not in enabled drivers build config 00:02:30.098 
net/netvsc: not in enabled drivers build config 00:02:30.098 net/nfb: not in enabled drivers build config 00:02:30.098 net/nfp: not in enabled drivers build config 00:02:30.098 net/ngbe: not in enabled drivers build config 00:02:30.098 net/null: not in enabled drivers build config 00:02:30.098 net/octeontx: not in enabled drivers build config 00:02:30.098 net/octeon_ep: not in enabled drivers build config 00:02:30.098 net/pcap: not in enabled drivers build config 00:02:30.098 net/pfe: not in enabled drivers build config 00:02:30.098 net/qede: not in enabled drivers build config 00:02:30.098 net/ring: not in enabled drivers build config 00:02:30.098 net/sfc: not in enabled drivers build config 00:02:30.098 net/softnic: not in enabled drivers build config 00:02:30.098 net/tap: not in enabled drivers build config 00:02:30.098 net/thunderx: not in enabled drivers build config 00:02:30.098 net/txgbe: not in enabled drivers build config 00:02:30.098 net/vdev_netvsc: not in enabled drivers build config 00:02:30.098 net/vhost: not in enabled drivers build config 00:02:30.098 net/virtio: not in enabled drivers build config 00:02:30.098 net/vmxnet3: not in enabled drivers build config 00:02:30.098 raw/*: missing internal dependency, "rawdev" 00:02:30.098 crypto/armv8: not in enabled drivers build config 00:02:30.098 crypto/bcmfs: not in enabled drivers build config 00:02:30.098 crypto/caam_jr: not in enabled drivers build config 00:02:30.098 crypto/ccp: not in enabled drivers build config 00:02:30.098 crypto/cnxk: not in enabled drivers build config 00:02:30.098 crypto/dpaa_sec: not in enabled drivers build config 00:02:30.098 crypto/dpaa2_sec: not in enabled drivers build config 00:02:30.098 crypto/ipsec_mb: not in enabled drivers build config 00:02:30.098 crypto/mlx5: not in enabled drivers build config 00:02:30.098 crypto/mvsam: not in enabled drivers build config 00:02:30.098 crypto/nitrox: not in enabled drivers build config 00:02:30.098 crypto/null: not in enabled drivers build config 00:02:30.098 crypto/octeontx: not in enabled drivers build config 00:02:30.098 crypto/openssl: not in enabled drivers build config 00:02:30.098 crypto/scheduler: not in enabled drivers build config 00:02:30.098 crypto/uadk: not in enabled drivers build config 00:02:30.098 crypto/virtio: not in enabled drivers build config 00:02:30.098 compress/isal: not in enabled drivers build config 00:02:30.098 compress/mlx5: not in enabled drivers build config 00:02:30.098 compress/nitrox: not in enabled drivers build config 00:02:30.098 compress/octeontx: not in enabled drivers build config 00:02:30.098 compress/zlib: not in enabled drivers build config 00:02:30.098 regex/*: missing internal dependency, "regexdev" 00:02:30.098 ml/*: missing internal dependency, "mldev" 00:02:30.098 vdpa/ifc: not in enabled drivers build config 00:02:30.098 vdpa/mlx5: not in enabled drivers build config 00:02:30.098 vdpa/nfp: not in enabled drivers build config 00:02:30.098 vdpa/sfc: not in enabled drivers build config 00:02:30.098 event/*: missing internal dependency, "eventdev" 00:02:30.098 baseband/*: missing internal dependency, "bbdev" 00:02:30.098 gpu/*: missing internal dependency, "gpudev" 00:02:30.098 00:02:30.098 00:02:30.098 Build targets in project: 84 00:02:30.098 00:02:30.098 DPDK 24.03.0 00:02:30.098 00:02:30.098 User defined options 00:02:30.098 buildtype : debug 00:02:30.098 default_library : shared 00:02:30.098 libdir : lib 00:02:30.098 prefix : /home/vagrant/spdk_repo/spdk/dpdk/build 00:02:30.098 b_sanitize : address 
00:02:30.098 c_args : -Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror
00:02:30.098 c_link_args :
00:02:30.098 cpu_instruction_set: native
00:02:30.098 disable_apps : dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
00:02:30.098 disable_libs : acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
00:02:30.098 enable_docs : false
00:02:30.098 enable_drivers : bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm
00:02:30.098 enable_kmods : false
00:02:30.098 max_lcores : 128
00:02:30.098 tests : false
00:02:30.098
00:02:30.098 Found ninja-1.11.1.git.kitware.jobserver-1 at /usr/local/bin/ninja
00:02:30.098 ninja: Entering directory `/home/vagrant/spdk_repo/spdk/dpdk/build-tmp'
00:02:30.098 [1/267] Compiling C object lib/librte_log.a.p/log_log_linux.c.o
00:02:30.098 [2/267] Compiling C object lib/librte_kvargs.a.p/kvargs_rte_kvargs.c.o
00:02:30.098 [3/267] Linking static target lib/librte_kvargs.a
00:02:30.098 [4/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_data.c.o
00:02:30.098 [5/267] Compiling C object lib/librte_log.a.p/log_log.c.o
00:02:30.098 [6/267] Linking static target lib/librte_log.a
00:02:30.365 [7/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dev.c.o
00:02:30.365 [8/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_class.c.o
00:02:30.365 [9/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_config.c.o
00:02:30.365 [10/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry_legacy.c.o
00:02:30.365 [11/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_errno.c.o
00:02:30.365 [12/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_bus.c.o
00:02:30.365 [13/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hexdump.c.o
00:02:30.365 [14/267] Compiling C object lib/librte_telemetry.a.p/telemetry_telemetry.c.o
00:02:30.365 [15/267] Linking static target lib/librte_telemetry.a
00:02:30.622 [16/267] Generating lib/kvargs.sym_chk with a custom command (wrapped by meson to capture output)
00:02:30.622 [17/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_devargs.c.o
00:02:30.622 [18/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_debug.c.o
00:02:30.622 [19/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memalloc.c.o
00:02:30.622 [20/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_mcfg.c.o
00:02:30.880 [21/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_launch.c.o
00:02:30.880 [22/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_lcore.c.o
00:02:30.880 [23/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_string_fns.c.o
00:02:30.880 [24/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_interrupts.c.o
00:02:30.880 [25/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memzone.c.o
00:02:30.880 [26/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_fbarray.c.o
00:02:30.880 [27/267] Generating lib/log.sym_chk with a custom command (wrapped by
meson to capture output) 00:02:30.880 [28/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_memory.c.o 00:02:30.880 [29/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_uuid.c.o 00:02:30.880 [30/267] Linking target lib/librte_log.so.24.1 00:02:31.138 [31/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_tailqs.c.o 00:02:31.139 [32/267] Generating lib/telemetry.sym_chk with a custom command (wrapped by meson to capture output) 00:02:31.139 [33/267] Generating symbol file lib/librte_log.so.24.1.p/librte_log.so.24.1.symbols 00:02:31.139 [34/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_reciprocal.c.o 00:02:31.139 [35/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_timer.c.o 00:02:31.139 [36/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_points.c.o 00:02:31.139 [37/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_thread.c.o 00:02:31.139 [38/267] Linking target lib/librte_kvargs.so.24.1 00:02:31.139 [39/267] Linking target lib/librte_telemetry.so.24.1 00:02:31.139 [40/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_version.c.o 00:02:31.398 [41/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_cpuflags.c.o 00:02:31.398 [42/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_options.c.o 00:02:31.398 [43/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_hypervisor.c.o 00:02:31.398 [44/267] Generating symbol file lib/librte_kvargs.so.24.1.p/librte_kvargs.so.24.1.symbols 00:02:31.398 [45/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_malloc.c.o 00:02:31.398 [46/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_random.c.o 00:02:31.398 [47/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_elem.c.o 00:02:31.398 [48/267] Generating symbol file lib/librte_telemetry.so.24.1.p/librte_telemetry.so.24.1.symbols 00:02:31.656 [49/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_heap.c.o 00:02:31.656 [50/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_debug.c.o 00:02:31.656 [51/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_dynmem.c.o 00:02:31.656 [52/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_utils.c.o 00:02:31.656 [53/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace_ctf.c.o 00:02:31.656 [54/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_service.c.o 00:02:31.656 [55/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_proc.c.o 00:02:31.656 [56/267] Compiling C object lib/librte_eal.a.p/eal_common_eal_common_trace.c.o 00:02:31.915 [57/267] Compiling C object lib/librte_eal.a.p/eal_common_rte_keepalive.c.o 00:02:31.915 [58/267] Compiling C object lib/librte_eal.a.p/eal_common_malloc_mp.c.o 00:02:31.915 [59/267] Compiling C object lib/librte_eal.a.p/eal_common_hotplug_mp.c.o 00:02:31.915 [60/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_cpuflags.c.o 00:02:31.915 [61/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_file.c.o 00:02:32.174 [62/267] Compiling C object lib/librte_eal.a.p/eal_unix_rte_thread.c.o 00:02:32.174 [63/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_memory.c.o 00:02:32.174 [64/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_filesystem.c.o 00:02:32.174 [65/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_unix_timer.c.o 00:02:32.174 [66/267] Compiling C object lib/librte_eal.a.p/eal_unix_eal_firmware.c.o 00:02:32.174 [67/267] Compiling C object 
lib/librte_eal.a.p/eal_unix_eal_unix_thread.c.o 00:02:32.174 [68/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_alarm.c.o 00:02:32.432 [69/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_dev.c.o 00:02:32.432 [70/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal.c.o 00:02:32.432 [71/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_hugepage_info.c.o 00:02:32.432 [72/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cpuflags.c.o 00:02:32.432 [73/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_lcore.c.o 00:02:32.432 [74/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_thread.c.o 00:02:32.432 [75/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_hypervisor.c.o 00:02:32.432 [76/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_spinlock.c.o 00:02:32.432 [77/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_timer.c.o 00:02:32.690 [78/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memalloc.c.o 00:02:32.690 [79/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_power_intrinsics.c.o 00:02:32.690 [80/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_interrupts.c.o 00:02:32.690 [81/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio_mp_sync.c.o 00:02:32.690 [82/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_memory.c.o 00:02:32.690 [83/267] Compiling C object lib/librte_eal.a.p/eal_x86_rte_cycles.c.o 00:02:32.949 [84/267] Compiling C object lib/librte_eal.a.p/eal_linux_eal_vfio.c.o 00:02:32.949 [85/267] Linking static target lib/librte_eal.a 00:02:32.949 [86/267] Compiling C object lib/librte_ring.a.p/ring_rte_ring.c.o 00:02:32.949 [87/267] Linking static target lib/librte_ring.a 00:02:32.949 [88/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops.c.o 00:02:33.207 [89/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_ptype.c.o 00:02:33.207 [90/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool.c.o 00:02:33.207 [91/267] Compiling C object lib/librte_mempool.a.p/mempool_mempool_trace_points.c.o 00:02:33.207 [92/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_pool_ops.c.o 00:02:33.207 [93/267] Compiling C object lib/librte_mempool.a.p/mempool_rte_mempool_ops_default.c.o 00:02:33.207 [94/267] Linking static target lib/librte_mempool.a 00:02:33.207 [95/267] Compiling C object lib/librte_rcu.a.p/rcu_rte_rcu_qsbr.c.o 00:02:33.207 [96/267] Linking static target lib/librte_rcu.a 00:02:33.465 [97/267] Generating lib/ring.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.465 [98/267] Compiling C object lib/librte_net.a.p/net_rte_net_crc.c.o 00:02:33.465 [99/267] Compiling C object lib/librte_net.a.p/net_rte_net.c.o 00:02:33.465 [100/267] Compiling C object lib/librte_net.a.p/net_rte_ether.c.o 00:02:33.466 [101/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf_dyn.c.o 00:02:33.466 [102/267] Compiling C object lib/librte_net.a.p/net_rte_arp.c.o 00:02:33.724 [103/267] Compiling C object lib/librte_net.a.p/net_net_crc_sse.c.o 00:02:33.724 [104/267] Generating lib/rcu.sym_chk with a custom command (wrapped by meson to capture output) 00:02:33.724 [105/267] Compiling C object lib/librte_mbuf.a.p/mbuf_rte_mbuf.c.o 00:02:33.724 [106/267] Linking static target lib/librte_mbuf.a 00:02:33.724 [107/267] Compiling C object lib/librte_meter.a.p/meter_rte_meter.c.o 00:02:33.724 [108/267] Linking static target lib/librte_meter.a 00:02:33.724 [109/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_private.c.o 00:02:33.724 
[110/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_driver.c.o 00:02:33.982 [111/267] Compiling C object lib/librte_net.a.p/net_net_crc_avx512.c.o 00:02:33.982 [112/267] Linking static target lib/librte_net.a 00:02:33.982 [113/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_profile.c.o 00:02:33.982 [114/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_class_eth.c.o 00:02:33.982 [115/267] Generating lib/meter.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.240 [116/267] Generating lib/mempool.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.240 [117/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_cman.c.o 00:02:34.240 [118/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_trace_points.c.o 00:02:34.240 [119/267] Generating lib/net.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.240 [120/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev_telemetry.c.o 00:02:34.497 [121/267] Generating lib/mbuf.sym_chk with a custom command (wrapped by meson to capture output) 00:02:34.497 [122/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_telemetry.c.o 00:02:34.497 [123/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8079.c.o 00:02:34.497 [124/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_common.c.o 00:02:34.497 [125/267] Compiling C object lib/librte_pci.a.p/pci_rte_pci.c.o 00:02:34.755 [126/267] Linking static target lib/librte_pci.a 00:02:34.755 [127/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8472.c.o 00:02:34.755 [128/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline.c.o 00:02:34.755 [129/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_cirbuf.c.o 00:02:34.755 [130/267] Compiling C object lib/librte_ethdev.a.p/ethdev_ethdev_linux_ethtool.c.o 00:02:34.755 [131/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse.c.o 00:02:34.755 [132/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_mtr.c.o 00:02:34.755 [133/267] Compiling C object lib/librte_ethdev.a.p/ethdev_sff_8636.c.o 00:02:35.012 [134/267] Generating lib/pci.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.012 [135/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_portlist.c.o 00:02:35.012 [136/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_num.c.o 00:02:35.012 [137/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_string.c.o 00:02:35.012 [138/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_vt100.c.o 00:02:35.012 [139/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_socket.c.o 00:02:35.012 [140/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_os_unix.c.o 00:02:35.012 [141/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_tm.c.o 00:02:35.012 [142/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_rdline.c.o 00:02:35.012 [143/267] Compiling C object lib/librte_hash.a.p/hash_rte_hash_crc.c.o 00:02:35.270 [144/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_etheraddr.c.o 00:02:35.270 [145/267] Compiling C object lib/librte_cmdline.a.p/cmdline_cmdline_parse_ipaddr.c.o 00:02:35.270 [146/267] Linking static target lib/librte_cmdline.a 00:02:35.270 [147/267] Compiling C object lib/librte_hash.a.p/hash_rte_fbk_hash.c.o 00:02:35.270 [148/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_flow.c.o 00:02:35.270 [149/267] Compiling C object 
lib/librte_timer.a.p/timer_rte_timer.c.o 00:02:35.270 [150/267] Linking static target lib/librte_timer.a 00:02:35.527 [151/267] Compiling C object lib/librte_ethdev.a.p/ethdev_rte_ethdev.c.o 00:02:35.527 [152/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash_gfni.c.o 00:02:35.527 [153/267] Linking static target lib/librte_ethdev.a 00:02:35.527 [154/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev_pmd.c.o 00:02:35.527 [155/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_compressdev.c.o 00:02:35.527 [156/267] Compiling C object lib/librte_hash.a.p/hash_rte_thash.c.o 00:02:35.785 [157/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_pmd.c.o 00:02:35.785 [158/267] Compiling C object lib/librte_compressdev.a.p/compressdev_rte_comp.c.o 00:02:35.785 [159/267] Linking static target lib/librte_compressdev.a 00:02:35.785 [160/267] Generating lib/timer.sym_chk with a custom command (wrapped by meson to capture output) 00:02:35.785 [161/267] Compiling C object lib/librte_hash.a.p/hash_rte_cuckoo_hash.c.o 00:02:35.785 [162/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_cryptodev_trace_points.c.o 00:02:35.785 [163/267] Linking static target lib/librte_hash.a 00:02:35.785 [164/267] Compiling C object lib/librte_power.a.p/power_guest_channel.c.o 00:02:36.042 [165/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev.c.o 00:02:36.042 [166/267] Compiling C object lib/librte_power.a.p/power_power_common.c.o 00:02:36.042 [167/267] Compiling C object lib/librte_dmadev.a.p/dmadev_rte_dmadev_trace_points.c.o 00:02:36.042 [168/267] Linking static target lib/librte_dmadev.a 00:02:36.042 [169/267] Compiling C object lib/librte_power.a.p/power_power_acpi_cpufreq.c.o 00:02:36.300 [170/267] Compiling C object lib/librte_power.a.p/power_power_kvm_vm.c.o 00:02:36.300 [171/267] Compiling C object lib/librte_power.a.p/power_power_amd_pstate_cpufreq.c.o 00:02:36.300 [172/267] Generating lib/cmdline.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.300 [173/267] Compiling C object lib/librte_power.a.p/power_power_cppc_cpufreq.c.o 00:02:36.558 [174/267] Generating lib/compressdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.558 [175/267] Compiling C object lib/librte_power.a.p/power_power_intel_uncore.c.o 00:02:36.558 [176/267] Compiling C object lib/librte_power.a.p/power_power_pstate_cpufreq.c.o 00:02:36.558 [177/267] Compiling C object lib/librte_cryptodev.a.p/cryptodev_rte_cryptodev.c.o 00:02:36.558 [178/267] Compiling C object lib/librte_power.a.p/power_rte_power.c.o 00:02:36.558 [179/267] Linking static target lib/librte_cryptodev.a 00:02:36.558 [180/267] Compiling C object lib/librte_power.a.p/power_rte_power_uncore.c.o 00:02:36.558 [181/267] Compiling C object lib/librte_vhost.a.p/vhost_fd_man.c.o 00:02:36.558 [182/267] Generating lib/dmadev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.558 [183/267] Generating lib/hash.sym_chk with a custom command (wrapped by meson to capture output) 00:02:36.815 [184/267] Compiling C object lib/librte_power.a.p/power_rte_power_pmd_mgmt.c.o 00:02:36.815 [185/267] Linking static target lib/librte_power.a 00:02:37.073 [186/267] Compiling C object lib/librte_reorder.a.p/reorder_rte_reorder.c.o 00:02:37.073 [187/267] Compiling C object lib/librte_vhost.a.p/vhost_iotlb.c.o 00:02:37.073 [188/267] Linking static target lib/librte_reorder.a 00:02:37.073 [189/267] Compiling C object 
lib/librte_vhost.a.p/vhost_socket.c.o 00:02:37.073 [190/267] Compiling C object lib/librte_vhost.a.p/vhost_vdpa.c.o 00:02:37.073 [191/267] Compiling C object lib/librte_security.a.p/security_rte_security.c.o 00:02:37.073 [192/267] Linking static target lib/librte_security.a 00:02:37.330 [193/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost.c.o 00:02:37.330 [194/267] Generating lib/reorder.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.587 [195/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net_ctrl.c.o 00:02:37.587 [196/267] Generating lib/power.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.587 [197/267] Generating lib/security.sym_chk with a custom command (wrapped by meson to capture output) 00:02:37.587 [198/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_params.c.o 00:02:37.587 [199/267] Compiling C object lib/librte_vhost.a.p/vhost_vduse.c.o 00:02:37.587 [200/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_user.c.o 00:02:37.845 [201/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common.c.o 00:02:38.103 [202/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_pci_common_uio.c.o 00:02:38.103 [203/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_uio.c.o 00:02:38.103 [204/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci.c.o 00:02:38.103 [205/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev_params.c.o 00:02:38.103 [206/267] Compiling C object drivers/libtmp_rte_bus_pci.a.p/bus_pci_linux_pci_vfio.c.o 00:02:38.103 [207/267] Linking static target drivers/libtmp_rte_bus_pci.a 00:02:38.103 [208/267] Compiling C object drivers/libtmp_rte_bus_vdev.a.p/bus_vdev_vdev.c.o 00:02:38.362 [209/267] Linking static target drivers/libtmp_rte_bus_vdev.a 00:02:38.362 [210/267] Generating lib/cryptodev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.362 [211/267] Generating drivers/rte_bus_vdev.pmd.c with a custom command 00:02:38.362 [212/267] Compiling C object drivers/librte_bus_vdev.a.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.362 [213/267] Generating drivers/rte_bus_pci.pmd.c with a custom command 00:02:38.362 [214/267] Compiling C object drivers/librte_bus_vdev.so.24.1.p/meson-generated_.._rte_bus_vdev.pmd.c.o 00:02:38.362 [215/267] Linking static target drivers/librte_bus_vdev.a 00:02:38.362 [216/267] Compiling C object drivers/librte_bus_pci.a.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.362 [217/267] Linking static target drivers/librte_bus_pci.a 00:02:38.362 [218/267] Compiling C object drivers/librte_bus_pci.so.24.1.p/meson-generated_.._rte_bus_pci.pmd.c.o 00:02:38.362 [219/267] Compiling C object drivers/libtmp_rte_mempool_ring.a.p/mempool_ring_rte_mempool_ring.c.o 00:02:38.362 [220/267] Linking static target drivers/libtmp_rte_mempool_ring.a 00:02:38.620 [221/267] Generating drivers/rte_mempool_ring.pmd.c with a custom command 00:02:38.620 [222/267] Compiling C object drivers/librte_mempool_ring.a.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.620 [223/267] Compiling C object drivers/librte_mempool_ring.so.24.1.p/meson-generated_.._rte_mempool_ring.pmd.c.o 00:02:38.620 [224/267] Linking static target drivers/librte_mempool_ring.a 00:02:38.620 [225/267] Generating drivers/rte_bus_vdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:38.878 [226/267] Generating drivers/rte_bus_pci.sym_chk with a custom command (wrapped by 
meson to capture output) 00:02:39.137 [227/267] Compiling C object lib/librte_vhost.a.p/vhost_vhost_crypto.c.o 00:02:40.073 [228/267] Generating lib/eal.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.073 [229/267] Linking target lib/librte_eal.so.24.1 00:02:40.073 [230/267] Generating symbol file lib/librte_eal.so.24.1.p/librte_eal.so.24.1.symbols 00:02:40.073 [231/267] Linking target lib/librte_timer.so.24.1 00:02:40.073 [232/267] Linking target lib/librte_pci.so.24.1 00:02:40.073 [233/267] Linking target lib/librte_ring.so.24.1 00:02:40.073 [234/267] Linking target lib/librte_meter.so.24.1 00:02:40.073 [235/267] Linking target lib/librte_dmadev.so.24.1 00:02:40.073 [236/267] Linking target drivers/librte_bus_vdev.so.24.1 00:02:40.331 [237/267] Generating symbol file lib/librte_ring.so.24.1.p/librte_ring.so.24.1.symbols 00:02:40.331 [238/267] Generating symbol file lib/librte_timer.so.24.1.p/librte_timer.so.24.1.symbols 00:02:40.331 [239/267] Generating symbol file lib/librte_meter.so.24.1.p/librte_meter.so.24.1.symbols 00:02:40.331 [240/267] Generating symbol file lib/librte_dmadev.so.24.1.p/librte_dmadev.so.24.1.symbols 00:02:40.331 [241/267] Linking target lib/librte_mempool.so.24.1 00:02:40.331 [242/267] Linking target lib/librte_rcu.so.24.1 00:02:40.331 [243/267] Generating symbol file lib/librte_pci.so.24.1.p/librte_pci.so.24.1.symbols 00:02:40.331 [244/267] Linking target drivers/librte_bus_pci.so.24.1 00:02:40.331 [245/267] Generating symbol file lib/librte_rcu.so.24.1.p/librte_rcu.so.24.1.symbols 00:02:40.331 [246/267] Generating symbol file lib/librte_mempool.so.24.1.p/librte_mempool.so.24.1.symbols 00:02:40.331 [247/267] Linking target lib/librte_mbuf.so.24.1 00:02:40.331 [248/267] Linking target drivers/librte_mempool_ring.so.24.1 00:02:40.589 [249/267] Generating symbol file lib/librte_mbuf.so.24.1.p/librte_mbuf.so.24.1.symbols 00:02:40.589 [250/267] Linking target lib/librte_net.so.24.1 00:02:40.589 [251/267] Linking target lib/librte_compressdev.so.24.1 00:02:40.589 [252/267] Linking target lib/librte_cryptodev.so.24.1 00:02:40.589 [253/267] Linking target lib/librte_reorder.so.24.1 00:02:40.589 [254/267] Generating symbol file lib/librte_net.so.24.1.p/librte_net.so.24.1.symbols 00:02:40.589 [255/267] Generating symbol file lib/librte_cryptodev.so.24.1.p/librte_cryptodev.so.24.1.symbols 00:02:40.589 [256/267] Linking target lib/librte_cmdline.so.24.1 00:02:40.589 [257/267] Linking target lib/librte_hash.so.24.1 00:02:40.589 [258/267] Linking target lib/librte_security.so.24.1 00:02:40.858 [259/267] Generating lib/ethdev.sym_chk with a custom command (wrapped by meson to capture output) 00:02:40.858 [260/267] Generating symbol file lib/librte_hash.so.24.1.p/librte_hash.so.24.1.symbols 00:02:40.858 [261/267] Linking target lib/librte_ethdev.so.24.1 00:02:40.858 [262/267] Generating symbol file lib/librte_ethdev.so.24.1.p/librte_ethdev.so.24.1.symbols 00:02:40.858 [263/267] Linking target lib/librte_power.so.24.1 00:02:42.248 [264/267] Compiling C object lib/librte_vhost.a.p/vhost_virtio_net.c.o 00:02:42.248 [265/267] Linking static target lib/librte_vhost.a 00:02:43.621 [266/267] Generating lib/vhost.sym_chk with a custom command (wrapped by meson to capture output) 00:02:43.621 [267/267] Linking target lib/librte_vhost.so.24.1 00:02:43.621 INFO: autodetecting backend as ninja 00:02:43.622 INFO: calculating backend command to run: /usr/local/bin/ninja -C /home/vagrant/spdk_repo/spdk/dpdk/build-tmp -j 10 00:02:58.491 CC lib/ut_mock/mock.o 
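
For reference, the DPDK build that completes above ([267/267] and the autodetected ninja backend) was configured with the "User defined options" summarized earlier in this log. Below is a minimal sketch of an equivalent manual invocation; the option values are copied verbatim from the logged summary, but the command shape is an assumption, since the wrapper script the job actually runs is not shown here.

  # Values copied from the "User defined options" block in the log above.
  DISABLE_APPS=dumpcap,graph,pdump,proc-info,test-acl,test-bbdev,test-cmdline,test-compress-perf,test-crypto-perf,test-dma-perf,test-eventdev,test-fib,test-flow-perf,test-gpudev,test-mldev,test-pipeline,test-pmd,test-regex,test-sad,test-security-perf,test
  DISABLE_LIBS=acl,argparse,bbdev,bitratestats,bpf,cfgfile,dispatcher,distributor,efd,eventdev,fib,gpudev,graph,gro,gso,ip_frag,ipsec,jobstats,latencystats,lpm,member,metrics,mldev,node,pcapng,pdcp,pdump,pipeline,port,rawdev,regexdev,rib,sched,stack,table
  ENABLE_DRIVERS=bus,bus/pci,bus/vdev,mempool/ring,power/acpi,power/amd_pstate,power/cppc,power/intel_pstate,power/intel_uncore,power/kvm_vm

  # Configure from the DPDK source directory into the logged build-tmp dir.
  meson setup build-tmp \
      --buildtype=debug --default-library=shared --libdir=lib \
      --prefix=/home/vagrant/spdk_repo/spdk/dpdk/build \
      -Db_sanitize=address \
      -Dc_args='-Wno-stringop-overflow -fcommon -Wno-stringop-overread -Wno-array-bounds -fPIC -Werror' \
      -Dcpu_instruction_set=native \
      -Ddisable_apps="$DISABLE_APPS" \
      -Ddisable_libs="$DISABLE_LIBS" \
      -Denable_docs=false \
      -Denable_drivers="$ENABLE_DRIVERS" \
      -Denable_kmods=false \
      -Dmax_lcores=128 \
      -Dtests=false
  ninja -C build-tmp -j 10    # matches the backend command the log reports
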
00:02:58.491 CC lib/log/log.o 00:02:58.491 CC lib/log/log_flags.o 00:02:58.491 CC lib/ut/ut.o 00:02:58.491 CC lib/log/log_deprecated.o 00:02:58.491 LIB libspdk_ut_mock.a 00:02:58.491 LIB libspdk_log.a 00:02:58.491 LIB libspdk_ut.a 00:02:58.491 SO libspdk_ut_mock.so.6.0 00:02:58.491 SO libspdk_ut.so.2.0 00:02:58.491 SO libspdk_log.so.7.1 00:02:58.491 SYMLINK libspdk_ut_mock.so 00:02:58.491 SYMLINK libspdk_ut.so 00:02:58.491 SYMLINK libspdk_log.so 00:02:58.491 CC lib/util/base64.o 00:02:58.491 CC lib/util/bit_array.o 00:02:58.491 CC lib/util/cpuset.o 00:02:58.491 CC lib/util/crc16.o 00:02:58.491 CC lib/util/crc32.o 00:02:58.491 CC lib/util/crc32c.o 00:02:58.491 CC lib/ioat/ioat.o 00:02:58.491 CC lib/dma/dma.o 00:02:58.491 CXX lib/trace_parser/trace.o 00:02:58.491 CC lib/vfio_user/host/vfio_user_pci.o 00:02:58.491 CC lib/util/crc32_ieee.o 00:02:58.491 CC lib/util/crc64.o 00:02:58.491 CC lib/vfio_user/host/vfio_user.o 00:02:58.491 CC lib/util/dif.o 00:02:58.491 CC lib/util/fd.o 00:02:58.491 LIB libspdk_dma.a 00:02:58.491 CC lib/util/fd_group.o 00:02:58.491 CC lib/util/file.o 00:02:58.491 CC lib/util/hexlify.o 00:02:58.491 SO libspdk_dma.so.5.0 00:02:58.491 SYMLINK libspdk_dma.so 00:02:58.491 CC lib/util/iov.o 00:02:58.491 LIB libspdk_ioat.a 00:02:58.491 SO libspdk_ioat.so.7.0 00:02:58.491 CC lib/util/math.o 00:02:58.491 CC lib/util/net.o 00:02:58.492 CC lib/util/pipe.o 00:02:58.492 LIB libspdk_vfio_user.a 00:02:58.492 CC lib/util/strerror_tls.o 00:02:58.492 SYMLINK libspdk_ioat.so 00:02:58.492 CC lib/util/string.o 00:02:58.492 SO libspdk_vfio_user.so.5.0 00:02:58.492 SYMLINK libspdk_vfio_user.so 00:02:58.492 CC lib/util/uuid.o 00:02:58.492 CC lib/util/xor.o 00:02:58.492 CC lib/util/zipf.o 00:02:58.492 CC lib/util/md5.o 00:02:58.492 LIB libspdk_util.a 00:02:58.492 SO libspdk_util.so.10.1 00:02:58.492 LIB libspdk_trace_parser.a 00:02:58.492 SO libspdk_trace_parser.so.6.0 00:02:58.492 SYMLINK libspdk_util.so 00:02:58.492 SYMLINK libspdk_trace_parser.so 00:02:58.492 CC lib/env_dpdk/env.o 00:02:58.492 CC lib/conf/conf.o 00:02:58.492 CC lib/env_dpdk/pci.o 00:02:58.492 CC lib/env_dpdk/memory.o 00:02:58.492 CC lib/idxd/idxd.o 00:02:58.492 CC lib/idxd/idxd_user.o 00:02:58.492 CC lib/env_dpdk/init.o 00:02:58.492 CC lib/rdma_utils/rdma_utils.o 00:02:58.492 CC lib/json/json_parse.o 00:02:58.492 CC lib/vmd/vmd.o 00:02:58.492 LIB libspdk_conf.a 00:02:58.492 SO libspdk_conf.so.6.0 00:02:58.492 CC lib/json/json_util.o 00:02:58.492 CC lib/idxd/idxd_kernel.o 00:02:58.492 LIB libspdk_rdma_utils.a 00:02:58.492 SO libspdk_rdma_utils.so.1.0 00:02:58.492 SYMLINK libspdk_conf.so 00:02:58.492 CC lib/vmd/led.o 00:02:58.492 SYMLINK libspdk_rdma_utils.so 00:02:58.492 CC lib/json/json_write.o 00:02:58.492 CC lib/env_dpdk/threads.o 00:02:58.492 CC lib/env_dpdk/pci_ioat.o 00:02:58.492 CC lib/env_dpdk/pci_virtio.o 00:02:58.492 CC lib/env_dpdk/pci_vmd.o 00:02:58.492 CC lib/env_dpdk/pci_idxd.o 00:02:58.492 CC lib/env_dpdk/pci_event.o 00:02:58.492 CC lib/env_dpdk/sigbus_handler.o 00:02:58.492 CC lib/env_dpdk/pci_dpdk.o 00:02:58.492 CC lib/env_dpdk/pci_dpdk_2207.o 00:02:58.492 CC lib/env_dpdk/pci_dpdk_2211.o 00:02:58.492 LIB libspdk_json.a 00:02:58.492 LIB libspdk_idxd.a 00:02:58.492 SO libspdk_json.so.6.0 00:02:58.492 LIB libspdk_vmd.a 00:02:58.492 SO libspdk_idxd.so.12.1 00:02:58.492 SYMLINK libspdk_json.so 00:02:58.492 SO libspdk_vmd.so.6.0 00:02:58.492 SYMLINK libspdk_idxd.so 00:02:58.492 CC lib/rdma_provider/common.o 00:02:58.492 CC lib/rdma_provider/rdma_provider_verbs.o 00:02:58.492 SYMLINK libspdk_vmd.so 00:02:58.492 CC 
lib/jsonrpc/jsonrpc_server.o 00:02:58.492 CC lib/jsonrpc/jsonrpc_client.o 00:02:58.492 CC lib/jsonrpc/jsonrpc_client_tcp.o 00:02:58.492 CC lib/jsonrpc/jsonrpc_server_tcp.o 00:02:58.492 LIB libspdk_rdma_provider.a 00:02:58.492 SO libspdk_rdma_provider.so.7.0 00:02:58.492 SYMLINK libspdk_rdma_provider.so 00:02:58.750 LIB libspdk_jsonrpc.a 00:02:58.750 SO libspdk_jsonrpc.so.6.0 00:02:58.750 SYMLINK libspdk_jsonrpc.so 00:02:59.008 LIB libspdk_env_dpdk.a 00:02:59.008 SO libspdk_env_dpdk.so.15.1 00:02:59.008 CC lib/rpc/rpc.o 00:02:59.008 SYMLINK libspdk_env_dpdk.so 00:02:59.267 LIB libspdk_rpc.a 00:02:59.267 SO libspdk_rpc.so.6.0 00:02:59.267 SYMLINK libspdk_rpc.so 00:02:59.524 CC lib/trace/trace.o 00:02:59.524 CC lib/trace/trace_flags.o 00:02:59.524 CC lib/trace/trace_rpc.o 00:02:59.524 CC lib/keyring/keyring.o 00:02:59.524 CC lib/keyring/keyring_rpc.o 00:02:59.524 CC lib/notify/notify.o 00:02:59.524 CC lib/notify/notify_rpc.o 00:02:59.524 LIB libspdk_notify.a 00:02:59.783 SO libspdk_notify.so.6.0 00:02:59.783 LIB libspdk_keyring.a 00:02:59.783 SYMLINK libspdk_notify.so 00:02:59.783 LIB libspdk_trace.a 00:02:59.783 SO libspdk_keyring.so.2.0 00:02:59.783 SO libspdk_trace.so.11.0 00:02:59.783 SYMLINK libspdk_keyring.so 00:02:59.783 SYMLINK libspdk_trace.so 00:03:00.041 CC lib/thread/thread.o 00:03:00.041 CC lib/thread/iobuf.o 00:03:00.041 CC lib/sock/sock_rpc.o 00:03:00.041 CC lib/sock/sock.o 00:03:00.607 LIB libspdk_sock.a 00:03:00.607 SO libspdk_sock.so.10.0 00:03:00.607 SYMLINK libspdk_sock.so 00:03:00.865 CC lib/nvme/nvme_ctrlr_cmd.o 00:03:00.865 CC lib/nvme/nvme_ctrlr.o 00:03:00.865 CC lib/nvme/nvme_ns_cmd.o 00:03:00.865 CC lib/nvme/nvme_fabric.o 00:03:00.865 CC lib/nvme/nvme_pcie.o 00:03:00.865 CC lib/nvme/nvme.o 00:03:00.865 CC lib/nvme/nvme_qpair.o 00:03:00.865 CC lib/nvme/nvme_ns.o 00:03:00.865 CC lib/nvme/nvme_pcie_common.o 00:03:01.431 CC lib/nvme/nvme_quirks.o 00:03:01.431 CC lib/nvme/nvme_transport.o 00:03:01.431 CC lib/nvme/nvme_discovery.o 00:03:01.431 CC lib/nvme/nvme_ctrlr_ocssd_cmd.o 00:03:01.431 LIB libspdk_thread.a 00:03:01.431 CC lib/nvme/nvme_ns_ocssd_cmd.o 00:03:01.689 SO libspdk_thread.so.11.0 00:03:01.689 CC lib/nvme/nvme_tcp.o 00:03:01.689 CC lib/nvme/nvme_opal.o 00:03:01.689 SYMLINK libspdk_thread.so 00:03:01.689 CC lib/nvme/nvme_io_msg.o 00:03:01.689 CC lib/nvme/nvme_poll_group.o 00:03:01.947 CC lib/nvme/nvme_zns.o 00:03:01.947 CC lib/nvme/nvme_stubs.o 00:03:01.947 CC lib/nvme/nvme_auth.o 00:03:01.947 CC lib/nvme/nvme_cuse.o 00:03:01.947 CC lib/nvme/nvme_rdma.o 00:03:02.205 CC lib/accel/accel.o 00:03:02.205 CC lib/blob/blobstore.o 00:03:02.205 CC lib/blob/request.o 00:03:02.463 CC lib/blob/blob_bs_dev.o 00:03:02.463 CC lib/blob/zeroes.o 00:03:02.463 CC lib/accel/accel_rpc.o 00:03:02.721 CC lib/accel/accel_sw.o 00:03:02.721 CC lib/init/json_config.o 00:03:02.721 CC lib/init/subsystem.o 00:03:02.979 CC lib/virtio/virtio.o 00:03:02.979 CC lib/fsdev/fsdev.o 00:03:02.979 CC lib/fsdev/fsdev_io.o 00:03:02.979 CC lib/virtio/virtio_vhost_user.o 00:03:02.979 CC lib/fsdev/fsdev_rpc.o 00:03:02.979 CC lib/virtio/virtio_vfio_user.o 00:03:02.979 CC lib/init/subsystem_rpc.o 00:03:02.979 CC lib/init/rpc.o 00:03:03.237 CC lib/virtio/virtio_pci.o 00:03:03.237 LIB libspdk_init.a 00:03:03.237 SO libspdk_init.so.6.0 00:03:03.237 LIB libspdk_nvme.a 00:03:03.237 SYMLINK libspdk_init.so 00:03:03.495 LIB libspdk_accel.a 00:03:03.495 LIB libspdk_fsdev.a 00:03:03.495 SO libspdk_nvme.so.15.0 00:03:03.495 SO libspdk_accel.so.16.0 00:03:03.495 LIB libspdk_virtio.a 00:03:03.495 SO 
libspdk_fsdev.so.2.0 00:03:03.495 SO libspdk_virtio.so.7.0 00:03:03.495 SYMLINK libspdk_fsdev.so 00:03:03.495 SYMLINK libspdk_accel.so 00:03:03.495 CC lib/event/app.o 00:03:03.495 CC lib/event/reactor.o 00:03:03.495 CC lib/event/app_rpc.o 00:03:03.495 CC lib/event/log_rpc.o 00:03:03.495 CC lib/event/scheduler_static.o 00:03:03.495 SYMLINK libspdk_virtio.so 00:03:03.753 SYMLINK libspdk_nvme.so 00:03:03.753 CC lib/fuse_dispatcher/fuse_dispatcher.o 00:03:03.753 CC lib/bdev/bdev.o 00:03:03.753 CC lib/bdev/bdev_zone.o 00:03:03.753 CC lib/bdev/part.o 00:03:03.753 CC lib/bdev/bdev_rpc.o 00:03:03.753 CC lib/bdev/scsi_nvme.o 00:03:04.012 LIB libspdk_event.a 00:03:04.012 SO libspdk_event.so.14.0 00:03:04.012 SYMLINK libspdk_event.so 00:03:04.269 LIB libspdk_fuse_dispatcher.a 00:03:04.269 SO libspdk_fuse_dispatcher.so.1.0 00:03:04.269 SYMLINK libspdk_fuse_dispatcher.so 00:03:05.203 LIB libspdk_blob.a 00:03:05.203 SO libspdk_blob.so.12.0 00:03:05.203 SYMLINK libspdk_blob.so 00:03:05.459 CC lib/lvol/lvol.o 00:03:05.459 CC lib/blobfs/blobfs.o 00:03:05.459 CC lib/blobfs/tree.o 00:03:06.077 LIB libspdk_bdev.a 00:03:06.077 SO libspdk_bdev.so.17.0 00:03:06.077 SYMLINK libspdk_bdev.so 00:03:06.077 CC lib/scsi/dev.o 00:03:06.077 CC lib/scsi/lun.o 00:03:06.077 CC lib/scsi/port.o 00:03:06.077 CC lib/scsi/scsi.o 00:03:06.077 CC lib/nbd/nbd.o 00:03:06.077 CC lib/ublk/ublk.o 00:03:06.077 CC lib/nvmf/ctrlr.o 00:03:06.077 CC lib/ftl/ftl_core.o 00:03:06.336 CC lib/ftl/ftl_init.o 00:03:06.336 CC lib/ftl/ftl_layout.o 00:03:06.336 LIB libspdk_blobfs.a 00:03:06.336 SO libspdk_blobfs.so.11.0 00:03:06.336 LIB libspdk_lvol.a 00:03:06.336 CC lib/ftl/ftl_debug.o 00:03:06.336 SYMLINK libspdk_blobfs.so 00:03:06.336 CC lib/nvmf/ctrlr_discovery.o 00:03:06.336 SO libspdk_lvol.so.11.0 00:03:06.336 CC lib/scsi/scsi_bdev.o 00:03:06.594 SYMLINK libspdk_lvol.so 00:03:06.594 CC lib/scsi/scsi_pr.o 00:03:06.594 CC lib/scsi/scsi_rpc.o 00:03:06.594 CC lib/nvmf/ctrlr_bdev.o 00:03:06.594 CC lib/nbd/nbd_rpc.o 00:03:06.594 CC lib/nvmf/subsystem.o 00:03:06.594 CC lib/nvmf/nvmf.o 00:03:06.594 CC lib/ftl/ftl_io.o 00:03:06.594 LIB libspdk_nbd.a 00:03:06.852 SO libspdk_nbd.so.7.0 00:03:06.852 CC lib/ftl/ftl_sb.o 00:03:06.852 SYMLINK libspdk_nbd.so 00:03:06.852 CC lib/ublk/ublk_rpc.o 00:03:06.852 CC lib/ftl/ftl_l2p.o 00:03:06.852 CC lib/nvmf/nvmf_rpc.o 00:03:06.852 CC lib/scsi/task.o 00:03:06.852 CC lib/nvmf/transport.o 00:03:06.852 LIB libspdk_ublk.a 00:03:06.852 CC lib/ftl/ftl_l2p_flat.o 00:03:07.110 SO libspdk_ublk.so.3.0 00:03:07.110 CC lib/nvmf/tcp.o 00:03:07.110 LIB libspdk_scsi.a 00:03:07.110 SYMLINK libspdk_ublk.so 00:03:07.110 CC lib/nvmf/stubs.o 00:03:07.110 SO libspdk_scsi.so.9.0 00:03:07.110 CC lib/ftl/ftl_nv_cache.o 00:03:07.110 CC lib/nvmf/mdns_server.o 00:03:07.110 SYMLINK libspdk_scsi.so 00:03:07.110 CC lib/nvmf/rdma.o 00:03:07.367 CC lib/nvmf/auth.o 00:03:07.367 CC lib/ftl/ftl_band.o 00:03:07.367 CC lib/ftl/ftl_band_ops.o 00:03:07.625 CC lib/ftl/ftl_writer.o 00:03:07.625 CC lib/iscsi/conn.o 00:03:07.625 CC lib/vhost/vhost.o 00:03:07.625 CC lib/vhost/vhost_rpc.o 00:03:07.883 CC lib/ftl/ftl_rq.o 00:03:07.883 CC lib/iscsi/init_grp.o 00:03:07.883 CC lib/iscsi/iscsi.o 00:03:07.883 CC lib/ftl/ftl_reloc.o 00:03:07.883 CC lib/ftl/ftl_l2p_cache.o 00:03:08.140 CC lib/iscsi/param.o 00:03:08.140 CC lib/iscsi/portal_grp.o 00:03:08.140 CC lib/ftl/ftl_p2l.o 00:03:08.140 CC lib/ftl/ftl_p2l_log.o 00:03:08.398 CC lib/iscsi/tgt_node.o 00:03:08.398 CC lib/iscsi/iscsi_subsystem.o 00:03:08.398 CC lib/ftl/mngt/ftl_mngt.o 00:03:08.398 CC 
lib/iscsi/iscsi_rpc.o 00:03:08.398 CC lib/iscsi/task.o 00:03:08.398 CC lib/ftl/mngt/ftl_mngt_bdev.o 00:03:08.398 CC lib/ftl/mngt/ftl_mngt_shutdown.o 00:03:08.398 CC lib/ftl/mngt/ftl_mngt_startup.o 00:03:08.656 CC lib/vhost/vhost_scsi.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_md.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_misc.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_ioch.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_l2p.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_band.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_self_test.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_p2l.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_recovery.o 00:03:08.656 CC lib/ftl/mngt/ftl_mngt_upgrade.o 00:03:08.656 CC lib/ftl/utils/ftl_conf.o 00:03:08.656 CC lib/ftl/utils/ftl_md.o 00:03:08.914 CC lib/ftl/utils/ftl_mempool.o 00:03:08.914 CC lib/ftl/utils/ftl_bitmap.o 00:03:08.914 CC lib/ftl/utils/ftl_property.o 00:03:08.914 CC lib/ftl/utils/ftl_layout_tracker_bdev.o 00:03:08.914 CC lib/ftl/upgrade/ftl_layout_upgrade.o 00:03:08.914 CC lib/ftl/upgrade/ftl_sb_upgrade.o 00:03:08.914 CC lib/ftl/upgrade/ftl_p2l_upgrade.o 00:03:08.914 CC lib/ftl/upgrade/ftl_band_upgrade.o 00:03:09.172 CC lib/ftl/upgrade/ftl_chunk_upgrade.o 00:03:09.172 CC lib/ftl/upgrade/ftl_trim_upgrade.o 00:03:09.172 CC lib/ftl/upgrade/ftl_sb_v3.o 00:03:09.172 CC lib/ftl/upgrade/ftl_sb_v5.o 00:03:09.172 CC lib/ftl/nvc/ftl_nvc_dev.o 00:03:09.172 LIB libspdk_iscsi.a 00:03:09.172 LIB libspdk_nvmf.a 00:03:09.172 CC lib/ftl/nvc/ftl_nvc_bdev_vss.o 00:03:09.172 SO libspdk_iscsi.so.8.0 00:03:09.172 CC lib/vhost/vhost_blk.o 00:03:09.172 CC lib/ftl/nvc/ftl_nvc_bdev_non_vss.o 00:03:09.172 CC lib/ftl/nvc/ftl_nvc_bdev_common.o 00:03:09.172 CC lib/ftl/base/ftl_base_dev.o 00:03:09.172 CC lib/vhost/rte_vhost_user.o 00:03:09.172 SO libspdk_nvmf.so.20.0 00:03:09.430 CC lib/ftl/base/ftl_base_bdev.o 00:03:09.430 CC lib/ftl/ftl_trace.o 00:03:09.430 SYMLINK libspdk_iscsi.so 00:03:09.430 SYMLINK libspdk_nvmf.so 00:03:09.430 LIB libspdk_ftl.a 00:03:09.688 SO libspdk_ftl.so.9.0 00:03:09.946 SYMLINK libspdk_ftl.so 00:03:09.946 LIB libspdk_vhost.a 00:03:10.203 SO libspdk_vhost.so.8.0 00:03:10.203 SYMLINK libspdk_vhost.so 00:03:10.464 CC module/env_dpdk/env_dpdk_rpc.o 00:03:10.464 CC module/accel/ioat/accel_ioat.o 00:03:10.464 CC module/accel/error/accel_error.o 00:03:10.464 CC module/accel/iaa/accel_iaa.o 00:03:10.464 CC module/fsdev/aio/fsdev_aio.o 00:03:10.464 CC module/keyring/file/keyring.o 00:03:10.464 CC module/accel/dsa/accel_dsa.o 00:03:10.464 CC module/sock/posix/posix.o 00:03:10.464 CC module/scheduler/dynamic/scheduler_dynamic.o 00:03:10.464 CC module/blob/bdev/blob_bdev.o 00:03:10.464 LIB libspdk_env_dpdk_rpc.a 00:03:10.464 SO libspdk_env_dpdk_rpc.so.6.0 00:03:10.722 SYMLINK libspdk_env_dpdk_rpc.so 00:03:10.722 CC module/fsdev/aio/fsdev_aio_rpc.o 00:03:10.722 CC module/keyring/file/keyring_rpc.o 00:03:10.722 CC module/accel/error/accel_error_rpc.o 00:03:10.722 CC module/accel/iaa/accel_iaa_rpc.o 00:03:10.722 LIB libspdk_scheduler_dynamic.a 00:03:10.722 SO libspdk_scheduler_dynamic.so.4.0 00:03:10.722 CC module/accel/ioat/accel_ioat_rpc.o 00:03:10.722 CC module/fsdev/aio/linux_aio_mgr.o 00:03:10.722 SYMLINK libspdk_scheduler_dynamic.so 00:03:10.722 LIB libspdk_keyring_file.a 00:03:10.722 LIB libspdk_accel_error.a 00:03:10.722 LIB libspdk_accel_iaa.a 00:03:10.722 LIB libspdk_accel_ioat.a 00:03:10.722 LIB libspdk_blob_bdev.a 00:03:10.722 CC module/accel/dsa/accel_dsa_rpc.o 00:03:10.722 SO libspdk_keyring_file.so.2.0 00:03:10.722 SO libspdk_accel_error.so.2.0 00:03:10.722 SO libspdk_accel_iaa.so.3.0 
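
Each LIB/SO/SYMLINK triplet in this stretch of the log corresponds to one SPDK component: the static archive, the versioned shared object, and the unversioned development symlink. A hedged way to sanity-check the artifacts once this phase finishes; the build/lib output directory is an assumption based on the usual SPDK tree layout, while the library names and so-versions are taken from the log itself.

  # Run from the SPDK repo root after the LIB/SO/SYMLINK steps complete.
  ls -l build/lib/libspdk_log.*                         # expect the .a, .so.7.1, and the .so symlink
  nm -D --defined-only build/lib/libspdk_log.so | head  # list a few exported symbols
  ldd build/lib/libspdk_env_dpdk.so | grep librte       # should resolve against the DPDK libs built above
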
00:03:10.722 SO libspdk_blob_bdev.so.12.0 00:03:10.722 SO libspdk_accel_ioat.so.6.0 00:03:10.722 SYMLINK libspdk_keyring_file.so 00:03:10.722 SYMLINK libspdk_accel_error.so 00:03:10.722 SYMLINK libspdk_blob_bdev.so 00:03:10.722 CC module/scheduler/dpdk_governor/dpdk_governor.o 00:03:10.722 SYMLINK libspdk_accel_iaa.so 00:03:10.979 SYMLINK libspdk_accel_ioat.so 00:03:10.979 LIB libspdk_accel_dsa.a 00:03:10.979 SO libspdk_accel_dsa.so.5.0 00:03:10.979 LIB libspdk_scheduler_dpdk_governor.a 00:03:10.979 CC module/keyring/linux/keyring.o 00:03:10.979 CC module/scheduler/gscheduler/gscheduler.o 00:03:10.979 SYMLINK libspdk_accel_dsa.so 00:03:10.979 SO libspdk_scheduler_dpdk_governor.so.4.0 00:03:10.979 SYMLINK libspdk_scheduler_dpdk_governor.so 00:03:10.979 CC module/bdev/delay/vbdev_delay.o 00:03:10.979 CC module/bdev/error/vbdev_error.o 00:03:10.979 CC module/blobfs/bdev/blobfs_bdev.o 00:03:10.979 CC module/bdev/gpt/gpt.o 00:03:10.979 CC module/keyring/linux/keyring_rpc.o 00:03:11.237 LIB libspdk_scheduler_gscheduler.a 00:03:11.237 LIB libspdk_fsdev_aio.a 00:03:11.237 SO libspdk_scheduler_gscheduler.so.4.0 00:03:11.237 CC module/bdev/lvol/vbdev_lvol.o 00:03:11.237 SO libspdk_fsdev_aio.so.1.0 00:03:11.237 LIB libspdk_keyring_linux.a 00:03:11.237 CC module/bdev/malloc/bdev_malloc.o 00:03:11.237 SYMLINK libspdk_scheduler_gscheduler.so 00:03:11.237 CC module/bdev/lvol/vbdev_lvol_rpc.o 00:03:11.237 SO libspdk_keyring_linux.so.1.0 00:03:11.237 CC module/blobfs/bdev/blobfs_bdev_rpc.o 00:03:11.237 SYMLINK libspdk_fsdev_aio.so 00:03:11.237 CC module/bdev/gpt/vbdev_gpt.o 00:03:11.237 SYMLINK libspdk_keyring_linux.so 00:03:11.237 LIB libspdk_sock_posix.a 00:03:11.237 SO libspdk_sock_posix.so.6.0 00:03:11.237 CC module/bdev/error/vbdev_error_rpc.o 00:03:11.237 LIB libspdk_blobfs_bdev.a 00:03:11.237 CC module/bdev/null/bdev_null.o 00:03:11.495 SO libspdk_blobfs_bdev.so.6.0 00:03:11.495 CC module/bdev/delay/vbdev_delay_rpc.o 00:03:11.495 SYMLINK libspdk_sock_posix.so 00:03:11.495 CC module/bdev/nvme/bdev_nvme.o 00:03:11.495 SYMLINK libspdk_blobfs_bdev.so 00:03:11.495 CC module/bdev/null/bdev_null_rpc.o 00:03:11.495 LIB libspdk_bdev_gpt.a 00:03:11.495 LIB libspdk_bdev_error.a 00:03:11.495 SO libspdk_bdev_gpt.so.6.0 00:03:11.495 SO libspdk_bdev_error.so.6.0 00:03:11.495 CC module/bdev/malloc/bdev_malloc_rpc.o 00:03:11.495 LIB libspdk_bdev_delay.a 00:03:11.495 SYMLINK libspdk_bdev_gpt.so 00:03:11.495 SO libspdk_bdev_delay.so.6.0 00:03:11.495 SYMLINK libspdk_bdev_error.so 00:03:11.495 CC module/bdev/passthru/vbdev_passthru.o 00:03:11.495 CC module/bdev/passthru/vbdev_passthru_rpc.o 00:03:11.495 SYMLINK libspdk_bdev_delay.so 00:03:11.495 LIB libspdk_bdev_malloc.a 00:03:11.495 LIB libspdk_bdev_null.a 00:03:11.495 SO libspdk_bdev_malloc.so.6.0 00:03:11.753 SO libspdk_bdev_null.so.6.0 00:03:11.753 LIB libspdk_bdev_lvol.a 00:03:11.753 SYMLINK libspdk_bdev_malloc.so 00:03:11.753 CC module/bdev/raid/bdev_raid.o 00:03:11.753 SYMLINK libspdk_bdev_null.so 00:03:11.753 SO libspdk_bdev_lvol.so.6.0 00:03:11.753 CC module/bdev/zone_block/vbdev_zone_block.o 00:03:11.753 CC module/bdev/xnvme/bdev_xnvme.o 00:03:11.753 CC module/bdev/split/vbdev_split.o 00:03:11.753 SYMLINK libspdk_bdev_lvol.so 00:03:11.753 LIB libspdk_bdev_passthru.a 00:03:11.753 SO libspdk_bdev_passthru.so.6.0 00:03:11.753 CC module/bdev/aio/bdev_aio.o 00:03:11.753 CC module/bdev/ftl/bdev_ftl.o 00:03:11.753 CC module/bdev/iscsi/bdev_iscsi.o 00:03:11.753 SYMLINK libspdk_bdev_passthru.so 00:03:11.753 CC module/bdev/ftl/bdev_ftl_rpc.o 00:03:12.011 CC 
module/bdev/virtio/bdev_virtio_scsi.o 00:03:12.011 CC module/bdev/split/vbdev_split_rpc.o 00:03:12.011 CC module/bdev/zone_block/vbdev_zone_block_rpc.o 00:03:12.011 CC module/bdev/virtio/bdev_virtio_blk.o 00:03:12.011 CC module/bdev/xnvme/bdev_xnvme_rpc.o 00:03:12.011 LIB libspdk_bdev_ftl.a 00:03:12.011 SO libspdk_bdev_ftl.so.6.0 00:03:12.011 LIB libspdk_bdev_zone_block.a 00:03:12.011 CC module/bdev/aio/bdev_aio_rpc.o 00:03:12.011 LIB libspdk_bdev_split.a 00:03:12.011 SYMLINK libspdk_bdev_ftl.so 00:03:12.011 CC module/bdev/virtio/bdev_virtio_rpc.o 00:03:12.011 SO libspdk_bdev_zone_block.so.6.0 00:03:12.011 SO libspdk_bdev_split.so.6.0 00:03:12.270 SYMLINK libspdk_bdev_zone_block.so 00:03:12.270 LIB libspdk_bdev_xnvme.a 00:03:12.270 CC module/bdev/iscsi/bdev_iscsi_rpc.o 00:03:12.270 SYMLINK libspdk_bdev_split.so 00:03:12.270 SO libspdk_bdev_xnvme.so.3.0 00:03:12.270 CC module/bdev/raid/bdev_raid_rpc.o 00:03:12.270 CC module/bdev/raid/bdev_raid_sb.o 00:03:12.270 SYMLINK libspdk_bdev_xnvme.so 00:03:12.270 CC module/bdev/nvme/bdev_nvme_rpc.o 00:03:12.270 CC module/bdev/raid/raid0.o 00:03:12.270 LIB libspdk_bdev_aio.a 00:03:12.270 CC module/bdev/nvme/nvme_rpc.o 00:03:12.270 SO libspdk_bdev_aio.so.6.0 00:03:12.270 LIB libspdk_bdev_iscsi.a 00:03:12.270 SYMLINK libspdk_bdev_aio.so 00:03:12.270 CC module/bdev/nvme/bdev_mdns_client.o 00:03:12.270 CC module/bdev/nvme/vbdev_opal.o 00:03:12.270 SO libspdk_bdev_iscsi.so.6.0 00:03:12.270 CC module/bdev/nvme/vbdev_opal_rpc.o 00:03:12.561 SYMLINK libspdk_bdev_iscsi.so 00:03:12.561 CC module/bdev/nvme/bdev_nvme_cuse_rpc.o 00:03:12.561 LIB libspdk_bdev_virtio.a 00:03:12.561 CC module/bdev/raid/raid1.o 00:03:12.561 CC module/bdev/raid/concat.o 00:03:12.561 SO libspdk_bdev_virtio.so.6.0 00:03:12.561 SYMLINK libspdk_bdev_virtio.so 00:03:12.561 LIB libspdk_bdev_raid.a 00:03:12.819 SO libspdk_bdev_raid.so.6.0 00:03:12.819 SYMLINK libspdk_bdev_raid.so 00:03:13.753 LIB libspdk_bdev_nvme.a 00:03:13.753 SO libspdk_bdev_nvme.so.7.1 00:03:14.011 SYMLINK libspdk_bdev_nvme.so 00:03:14.270 CC module/event/subsystems/iobuf/iobuf.o 00:03:14.270 CC module/event/subsystems/iobuf/iobuf_rpc.o 00:03:14.270 CC module/event/subsystems/scheduler/scheduler.o 00:03:14.270 CC module/event/subsystems/vhost_blk/vhost_blk.o 00:03:14.270 CC module/event/subsystems/sock/sock.o 00:03:14.270 CC module/event/subsystems/vmd/vmd.o 00:03:14.270 CC module/event/subsystems/vmd/vmd_rpc.o 00:03:14.270 CC module/event/subsystems/keyring/keyring.o 00:03:14.270 CC module/event/subsystems/fsdev/fsdev.o 00:03:14.527 LIB libspdk_event_vhost_blk.a 00:03:14.528 LIB libspdk_event_vmd.a 00:03:14.528 LIB libspdk_event_keyring.a 00:03:14.528 LIB libspdk_event_scheduler.a 00:03:14.528 LIB libspdk_event_iobuf.a 00:03:14.528 LIB libspdk_event_sock.a 00:03:14.528 SO libspdk_event_vhost_blk.so.3.0 00:03:14.528 SO libspdk_event_keyring.so.1.0 00:03:14.528 SO libspdk_event_scheduler.so.4.0 00:03:14.528 LIB libspdk_event_fsdev.a 00:03:14.528 SO libspdk_event_vmd.so.6.0 00:03:14.528 SO libspdk_event_iobuf.so.3.0 00:03:14.528 SO libspdk_event_sock.so.5.0 00:03:14.528 SO libspdk_event_fsdev.so.1.0 00:03:14.528 SYMLINK libspdk_event_vhost_blk.so 00:03:14.528 SYMLINK libspdk_event_keyring.so 00:03:14.528 SYMLINK libspdk_event_scheduler.so 00:03:14.528 SYMLINK libspdk_event_vmd.so 00:03:14.528 SYMLINK libspdk_event_iobuf.so 00:03:14.528 SYMLINK libspdk_event_fsdev.so 00:03:14.528 SYMLINK libspdk_event_sock.so 00:03:14.785 CC module/event/subsystems/accel/accel.o 00:03:14.786 LIB libspdk_event_accel.a 00:03:14.786 SO 
libspdk_event_accel.so.6.0 00:03:15.044 SYMLINK libspdk_event_accel.so 00:03:15.301 CC module/event/subsystems/bdev/bdev.o 00:03:15.301 LIB libspdk_event_bdev.a 00:03:15.301 SO libspdk_event_bdev.so.6.0 00:03:15.301 SYMLINK libspdk_event_bdev.so 00:03:15.559 CC module/event/subsystems/ublk/ublk.o 00:03:15.559 CC module/event/subsystems/nbd/nbd.o 00:03:15.559 CC module/event/subsystems/scsi/scsi.o 00:03:15.559 CC module/event/subsystems/nvmf/nvmf_rpc.o 00:03:15.559 CC module/event/subsystems/nvmf/nvmf_tgt.o 00:03:15.817 LIB libspdk_event_ublk.a 00:03:15.817 LIB libspdk_event_nbd.a 00:03:15.817 LIB libspdk_event_scsi.a 00:03:15.817 SO libspdk_event_ublk.so.3.0 00:03:15.817 SO libspdk_event_nbd.so.6.0 00:03:15.817 SO libspdk_event_scsi.so.6.0 00:03:15.817 SYMLINK libspdk_event_ublk.so 00:03:15.817 SYMLINK libspdk_event_nbd.so 00:03:15.817 SYMLINK libspdk_event_scsi.so 00:03:15.817 LIB libspdk_event_nvmf.a 00:03:15.817 SO libspdk_event_nvmf.so.6.0 00:03:15.817 SYMLINK libspdk_event_nvmf.so 00:03:16.076 CC module/event/subsystems/iscsi/iscsi.o 00:03:16.076 CC module/event/subsystems/vhost_scsi/vhost_scsi.o 00:03:16.076 LIB libspdk_event_vhost_scsi.a 00:03:16.076 SO libspdk_event_vhost_scsi.so.3.0 00:03:16.076 LIB libspdk_event_iscsi.a 00:03:16.076 SYMLINK libspdk_event_vhost_scsi.so 00:03:16.076 SO libspdk_event_iscsi.so.6.0 00:03:16.334 SYMLINK libspdk_event_iscsi.so 00:03:16.334 SO libspdk.so.6.0 00:03:16.334 SYMLINK libspdk.so 00:03:16.592 CC app/spdk_nvme_identify/identify.o 00:03:16.592 CC app/trace_record/trace_record.o 00:03:16.592 CXX app/trace/trace.o 00:03:16.592 CC app/spdk_lspci/spdk_lspci.o 00:03:16.592 CC app/spdk_nvme_perf/perf.o 00:03:16.592 CC app/iscsi_tgt/iscsi_tgt.o 00:03:16.592 CC app/nvmf_tgt/nvmf_main.o 00:03:16.592 CC app/spdk_tgt/spdk_tgt.o 00:03:16.592 CC examples/util/zipf/zipf.o 00:03:16.592 CC test/thread/poller_perf/poller_perf.o 00:03:16.592 LINK spdk_lspci 00:03:16.850 LINK iscsi_tgt 00:03:16.850 LINK nvmf_tgt 00:03:16.850 LINK poller_perf 00:03:16.850 LINK zipf 00:03:16.850 LINK spdk_trace_record 00:03:16.850 LINK spdk_tgt 00:03:16.850 CC app/spdk_nvme_discover/discovery_aer.o 00:03:16.850 LINK spdk_trace 00:03:17.109 TEST_HEADER include/spdk/accel.h 00:03:17.109 TEST_HEADER include/spdk/accel_module.h 00:03:17.109 TEST_HEADER include/spdk/assert.h 00:03:17.109 TEST_HEADER include/spdk/barrier.h 00:03:17.109 TEST_HEADER include/spdk/base64.h 00:03:17.109 CC app/spdk_top/spdk_top.o 00:03:17.109 TEST_HEADER include/spdk/bdev.h 00:03:17.109 TEST_HEADER include/spdk/bdev_module.h 00:03:17.109 TEST_HEADER include/spdk/bdev_zone.h 00:03:17.109 TEST_HEADER include/spdk/bit_array.h 00:03:17.109 TEST_HEADER include/spdk/bit_pool.h 00:03:17.109 TEST_HEADER include/spdk/blob_bdev.h 00:03:17.109 TEST_HEADER include/spdk/blobfs_bdev.h 00:03:17.109 TEST_HEADER include/spdk/blobfs.h 00:03:17.109 TEST_HEADER include/spdk/blob.h 00:03:17.109 TEST_HEADER include/spdk/conf.h 00:03:17.109 TEST_HEADER include/spdk/config.h 00:03:17.109 TEST_HEADER include/spdk/cpuset.h 00:03:17.109 TEST_HEADER include/spdk/crc16.h 00:03:17.109 TEST_HEADER include/spdk/crc32.h 00:03:17.109 TEST_HEADER include/spdk/crc64.h 00:03:17.109 TEST_HEADER include/spdk/dif.h 00:03:17.109 TEST_HEADER include/spdk/dma.h 00:03:17.109 TEST_HEADER include/spdk/endian.h 00:03:17.109 TEST_HEADER include/spdk/env_dpdk.h 00:03:17.109 TEST_HEADER include/spdk/env.h 00:03:17.109 TEST_HEADER include/spdk/event.h 00:03:17.109 TEST_HEADER include/spdk/fd_group.h 00:03:17.109 CC examples/ioat/perf/perf.o 00:03:17.109 
TEST_HEADER include/spdk/fd.h 00:03:17.109 TEST_HEADER include/spdk/file.h 00:03:17.109 TEST_HEADER include/spdk/fsdev.h 00:03:17.109 TEST_HEADER include/spdk/fsdev_module.h 00:03:17.109 TEST_HEADER include/spdk/ftl.h 00:03:17.109 TEST_HEADER include/spdk/gpt_spec.h 00:03:17.109 TEST_HEADER include/spdk/hexlify.h 00:03:17.109 TEST_HEADER include/spdk/histogram_data.h 00:03:17.109 TEST_HEADER include/spdk/idxd.h 00:03:17.109 TEST_HEADER include/spdk/idxd_spec.h 00:03:17.109 TEST_HEADER include/spdk/init.h 00:03:17.109 TEST_HEADER include/spdk/ioat.h 00:03:17.109 TEST_HEADER include/spdk/ioat_spec.h 00:03:17.109 TEST_HEADER include/spdk/iscsi_spec.h 00:03:17.109 TEST_HEADER include/spdk/json.h 00:03:17.109 TEST_HEADER include/spdk/jsonrpc.h 00:03:17.109 TEST_HEADER include/spdk/keyring.h 00:03:17.109 TEST_HEADER include/spdk/keyring_module.h 00:03:17.109 TEST_HEADER include/spdk/likely.h 00:03:17.109 TEST_HEADER include/spdk/log.h 00:03:17.109 LINK spdk_nvme_discover 00:03:17.109 TEST_HEADER include/spdk/lvol.h 00:03:17.109 TEST_HEADER include/spdk/md5.h 00:03:17.109 TEST_HEADER include/spdk/memory.h 00:03:17.109 CC app/spdk_dd/spdk_dd.o 00:03:17.109 TEST_HEADER include/spdk/mmio.h 00:03:17.109 CC test/dma/test_dma/test_dma.o 00:03:17.109 TEST_HEADER include/spdk/nbd.h 00:03:17.109 CC test/app/bdev_svc/bdev_svc.o 00:03:17.109 TEST_HEADER include/spdk/net.h 00:03:17.109 TEST_HEADER include/spdk/notify.h 00:03:17.109 TEST_HEADER include/spdk/nvme.h 00:03:17.109 TEST_HEADER include/spdk/nvme_intel.h 00:03:17.109 TEST_HEADER include/spdk/nvme_ocssd.h 00:03:17.109 TEST_HEADER include/spdk/nvme_ocssd_spec.h 00:03:17.109 TEST_HEADER include/spdk/nvme_spec.h 00:03:17.109 TEST_HEADER include/spdk/nvme_zns.h 00:03:17.109 TEST_HEADER include/spdk/nvmf_cmd.h 00:03:17.109 TEST_HEADER include/spdk/nvmf_fc_spec.h 00:03:17.109 TEST_HEADER include/spdk/nvmf.h 00:03:17.109 TEST_HEADER include/spdk/nvmf_spec.h 00:03:17.109 TEST_HEADER include/spdk/nvmf_transport.h 00:03:17.109 TEST_HEADER include/spdk/opal.h 00:03:17.109 TEST_HEADER include/spdk/opal_spec.h 00:03:17.109 TEST_HEADER include/spdk/pci_ids.h 00:03:17.109 TEST_HEADER include/spdk/pipe.h 00:03:17.109 TEST_HEADER include/spdk/queue.h 00:03:17.109 CC examples/ioat/verify/verify.o 00:03:17.109 TEST_HEADER include/spdk/reduce.h 00:03:17.109 TEST_HEADER include/spdk/rpc.h 00:03:17.109 TEST_HEADER include/spdk/scheduler.h 00:03:17.109 TEST_HEADER include/spdk/scsi.h 00:03:17.109 TEST_HEADER include/spdk/scsi_spec.h 00:03:17.109 TEST_HEADER include/spdk/sock.h 00:03:17.109 TEST_HEADER include/spdk/stdinc.h 00:03:17.109 TEST_HEADER include/spdk/string.h 00:03:17.109 TEST_HEADER include/spdk/thread.h 00:03:17.109 TEST_HEADER include/spdk/trace.h 00:03:17.109 TEST_HEADER include/spdk/trace_parser.h 00:03:17.109 TEST_HEADER include/spdk/tree.h 00:03:17.109 TEST_HEADER include/spdk/ublk.h 00:03:17.109 TEST_HEADER include/spdk/util.h 00:03:17.109 TEST_HEADER include/spdk/uuid.h 00:03:17.109 TEST_HEADER include/spdk/version.h 00:03:17.109 TEST_HEADER include/spdk/vfio_user_pci.h 00:03:17.109 TEST_HEADER include/spdk/vfio_user_spec.h 00:03:17.109 TEST_HEADER include/spdk/vhost.h 00:03:17.109 TEST_HEADER include/spdk/vmd.h 00:03:17.109 TEST_HEADER include/spdk/xor.h 00:03:17.109 TEST_HEADER include/spdk/zipf.h 00:03:17.109 CXX test/cpp_headers/accel.o 00:03:17.367 LINK ioat_perf 00:03:17.367 CXX test/cpp_headers/accel_module.o 00:03:17.367 LINK bdev_svc 00:03:17.367 LINK verify 00:03:17.367 LINK spdk_nvme_identify 00:03:17.367 CXX test/cpp_headers/assert.o 
00:03:17.367 CC test/app/fuzz/nvme_fuzz/nvme_fuzz.o 00:03:17.367 LINK spdk_dd 00:03:17.367 CXX test/cpp_headers/barrier.o 00:03:17.367 LINK spdk_nvme_perf 00:03:17.625 CXX test/cpp_headers/base64.o 00:03:17.625 LINK test_dma 00:03:17.626 CC examples/vmd/lsvmd/lsvmd.o 00:03:17.626 CC examples/idxd/perf/perf.o 00:03:17.626 CC test/event/event_perf/event_perf.o 00:03:17.626 CC test/app/fuzz/iscsi_fuzz/iscsi_fuzz.o 00:03:17.626 CC test/env/mem_callbacks/mem_callbacks.o 00:03:17.626 CXX test/cpp_headers/bdev.o 00:03:17.626 CC app/fio/nvme/fio_plugin.o 00:03:17.626 LINK lsvmd 00:03:17.884 CC test/env/vtophys/vtophys.o 00:03:17.884 LINK nvme_fuzz 00:03:17.884 LINK event_perf 00:03:17.884 CXX test/cpp_headers/bdev_module.o 00:03:17.884 LINK idxd_perf 00:03:17.884 LINK vtophys 00:03:17.884 LINK spdk_top 00:03:17.884 CC examples/vmd/led/led.o 00:03:17.884 CXX test/cpp_headers/bdev_zone.o 00:03:17.884 CC test/event/reactor/reactor.o 00:03:18.142 CC test/app/histogram_perf/histogram_perf.o 00:03:18.142 CXX test/cpp_headers/bit_array.o 00:03:18.142 CXX test/cpp_headers/bit_pool.o 00:03:18.142 LINK led 00:03:18.142 CC test/app/jsoncat/jsoncat.o 00:03:18.142 CC test/app/stub/stub.o 00:03:18.142 LINK reactor 00:03:18.142 LINK histogram_perf 00:03:18.142 LINK mem_callbacks 00:03:18.142 CXX test/cpp_headers/blob_bdev.o 00:03:18.142 LINK jsoncat 00:03:18.142 LINK stub 00:03:18.142 LINK spdk_nvme 00:03:18.142 CC app/vhost/vhost.o 00:03:18.402 CC examples/interrupt_tgt/interrupt_tgt.o 00:03:18.402 CC test/event/reactor_perf/reactor_perf.o 00:03:18.402 CXX test/cpp_headers/blobfs_bdev.o 00:03:18.402 CC test/env/env_dpdk_post_init/env_dpdk_post_init.o 00:03:18.402 CC test/env/pci/pci_ut.o 00:03:18.402 CC test/env/memory/memory_ut.o 00:03:18.402 CC app/fio/bdev/fio_plugin.o 00:03:18.402 CC test/app/fuzz/vhost_fuzz/vhost_fuzz_rpc.o 00:03:18.402 LINK vhost 00:03:18.402 LINK reactor_perf 00:03:18.402 LINK interrupt_tgt 00:03:18.402 LINK env_dpdk_post_init 00:03:18.402 CXX test/cpp_headers/blobfs.o 00:03:18.661 CC test/app/fuzz/vhost_fuzz/vhost_fuzz.o 00:03:18.661 CXX test/cpp_headers/blob.o 00:03:18.661 CC test/event/app_repeat/app_repeat.o 00:03:18.661 CXX test/cpp_headers/conf.o 00:03:18.661 CC test/event/scheduler/scheduler.o 00:03:18.661 LINK pci_ut 00:03:18.661 CXX test/cpp_headers/config.o 00:03:18.661 CXX test/cpp_headers/cpuset.o 00:03:18.661 LINK app_repeat 00:03:18.661 LINK spdk_bdev 00:03:18.919 CC examples/thread/thread/thread_ex.o 00:03:18.919 LINK scheduler 00:03:18.919 CXX test/cpp_headers/crc16.o 00:03:18.919 CXX test/cpp_headers/crc32.o 00:03:18.919 CXX test/cpp_headers/crc64.o 00:03:18.919 CC examples/sock/hello_world/hello_sock.o 00:03:18.919 LINK vhost_fuzz 00:03:18.919 CC test/rpc_client/rpc_client_test.o 00:03:18.919 CXX test/cpp_headers/dif.o 00:03:18.919 LINK thread 00:03:18.919 CXX test/cpp_headers/dma.o 00:03:19.178 CXX test/cpp_headers/endian.o 00:03:19.178 LINK rpc_client_test 00:03:19.178 CXX test/cpp_headers/env_dpdk.o 00:03:19.178 LINK hello_sock 00:03:19.178 CC test/accel/dif/dif.o 00:03:19.178 CC test/blobfs/mkfs/mkfs.o 00:03:19.178 CXX test/cpp_headers/env.o 00:03:19.178 CC test/nvme/aer/aer.o 00:03:19.178 CC test/nvme/reset/reset.o 00:03:19.436 CC test/lvol/esnap/esnap.o 00:03:19.436 CC examples/accel/perf/accel_perf.o 00:03:19.436 CC test/nvme/sgl/sgl.o 00:03:19.437 LINK iscsi_fuzz 00:03:19.437 LINK mkfs 00:03:19.437 CXX test/cpp_headers/event.o 00:03:19.437 LINK memory_ut 00:03:19.437 LINK reset 00:03:19.437 LINK aer 00:03:19.437 CXX test/cpp_headers/fd_group.o 00:03:19.437 LINK 
sgl 00:03:19.437 CXX test/cpp_headers/fd.o 00:03:19.437 CXX test/cpp_headers/file.o 00:03:19.695 CC test/nvme/e2edp/nvme_dp.o 00:03:19.695 CXX test/cpp_headers/fsdev.o 00:03:19.695 CC examples/nvme/hello_world/hello_world.o 00:03:19.695 CC examples/blob/hello_world/hello_blob.o 00:03:19.695 CC examples/blob/cli/blobcli.o 00:03:19.695 CC examples/nvme/reconnect/reconnect.o 00:03:19.695 CC test/nvme/overhead/overhead.o 00:03:19.695 CXX test/cpp_headers/fsdev_module.o 00:03:19.695 LINK accel_perf 00:03:19.954 LINK nvme_dp 00:03:19.954 LINK dif 00:03:19.954 LINK hello_world 00:03:19.954 CXX test/cpp_headers/ftl.o 00:03:19.954 LINK hello_blob 00:03:19.954 LINK overhead 00:03:20.212 CC examples/nvme/nvme_manage/nvme_manage.o 00:03:20.212 CXX test/cpp_headers/gpt_spec.o 00:03:20.212 LINK reconnect 00:03:20.212 CC examples/fsdev/hello_world/hello_fsdev.o 00:03:20.212 CC test/nvme/err_injection/err_injection.o 00:03:20.212 CC examples/bdev/hello_world/hello_bdev.o 00:03:20.212 CXX test/cpp_headers/hexlify.o 00:03:20.212 CC test/bdev/bdevio/bdevio.o 00:03:20.212 LINK blobcli 00:03:20.212 LINK err_injection 00:03:20.212 CC examples/nvme/arbitration/arbitration.o 00:03:20.471 CXX test/cpp_headers/histogram_data.o 00:03:20.471 LINK hello_fsdev 00:03:20.471 CC examples/nvme/hotplug/hotplug.o 00:03:20.471 LINK hello_bdev 00:03:20.471 CXX test/cpp_headers/idxd.o 00:03:20.471 CC examples/nvme/cmb_copy/cmb_copy.o 00:03:20.471 LINK bdevio 00:03:20.471 CC test/nvme/startup/startup.o 00:03:20.471 LINK hotplug 00:03:20.471 CC examples/nvme/abort/abort.o 00:03:20.471 LINK nvme_manage 00:03:20.730 CXX test/cpp_headers/idxd_spec.o 00:03:20.730 LINK arbitration 00:03:20.730 CXX test/cpp_headers/init.o 00:03:20.730 LINK startup 00:03:20.730 CC examples/bdev/bdevperf/bdevperf.o 00:03:20.730 LINK cmb_copy 00:03:20.730 CC examples/nvme/pmr_persistence/pmr_persistence.o 00:03:20.730 CXX test/cpp_headers/ioat.o 00:03:20.730 CXX test/cpp_headers/ioat_spec.o 00:03:20.730 CC test/nvme/reserve/reserve.o 00:03:20.730 CC test/nvme/simple_copy/simple_copy.o 00:03:20.730 LINK pmr_persistence 00:03:20.730 CC test/nvme/connect_stress/connect_stress.o 00:03:20.988 CC test/nvme/boot_partition/boot_partition.o 00:03:20.988 CXX test/cpp_headers/iscsi_spec.o 00:03:20.988 CXX test/cpp_headers/json.o 00:03:20.988 LINK abort 00:03:20.988 CXX test/cpp_headers/jsonrpc.o 00:03:20.988 LINK reserve 00:03:20.988 CXX test/cpp_headers/keyring.o 00:03:20.988 LINK boot_partition 00:03:20.988 LINK simple_copy 00:03:20.988 LINK connect_stress 00:03:20.988 CXX test/cpp_headers/keyring_module.o 00:03:21.279 CC test/nvme/compliance/nvme_compliance.o 00:03:21.279 CC test/nvme/fused_ordering/fused_ordering.o 00:03:21.279 CXX test/cpp_headers/likely.o 00:03:21.279 CXX test/cpp_headers/log.o 00:03:21.279 CC test/nvme/doorbell_aers/doorbell_aers.o 00:03:21.279 CC test/nvme/cuse/cuse.o 00:03:21.279 CC test/nvme/fdp/fdp.o 00:03:21.279 CXX test/cpp_headers/lvol.o 00:03:21.279 CXX test/cpp_headers/md5.o 00:03:21.279 LINK doorbell_aers 00:03:21.279 CXX test/cpp_headers/memory.o 00:03:21.279 LINK fused_ordering 00:03:21.279 CXX test/cpp_headers/mmio.o 00:03:21.279 CXX test/cpp_headers/nbd.o 00:03:21.279 LINK nvme_compliance 00:03:21.279 CXX test/cpp_headers/net.o 00:03:21.279 CXX test/cpp_headers/notify.o 00:03:21.536 CXX test/cpp_headers/nvme.o 00:03:21.536 LINK bdevperf 00:03:21.536 CXX test/cpp_headers/nvme_intel.o 00:03:21.536 CXX test/cpp_headers/nvme_ocssd.o 00:03:21.536 CXX test/cpp_headers/nvme_ocssd_spec.o 00:03:21.536 CXX test/cpp_headers/nvme_spec.o 
00:03:21.536 CXX test/cpp_headers/nvme_zns.o 00:03:21.536 LINK fdp 00:03:21.536 CXX test/cpp_headers/nvmf_cmd.o 00:03:21.536 CXX test/cpp_headers/nvmf_fc_spec.o 00:03:21.536 CXX test/cpp_headers/nvmf.o 00:03:21.536 CXX test/cpp_headers/nvmf_spec.o 00:03:21.794 CXX test/cpp_headers/nvmf_transport.o 00:03:21.794 CXX test/cpp_headers/opal.o 00:03:21.794 CXX test/cpp_headers/opal_spec.o 00:03:21.794 CXX test/cpp_headers/pci_ids.o 00:03:21.794 CXX test/cpp_headers/queue.o 00:03:21.794 CXX test/cpp_headers/pipe.o 00:03:21.794 CC examples/nvmf/nvmf/nvmf.o 00:03:21.794 CXX test/cpp_headers/reduce.o 00:03:21.794 CXX test/cpp_headers/rpc.o 00:03:21.794 CXX test/cpp_headers/scheduler.o 00:03:21.794 CXX test/cpp_headers/scsi.o 00:03:21.794 CXX test/cpp_headers/scsi_spec.o 00:03:21.794 CXX test/cpp_headers/sock.o 00:03:21.794 CXX test/cpp_headers/stdinc.o 00:03:21.794 CXX test/cpp_headers/string.o 00:03:21.794 CXX test/cpp_headers/thread.o 00:03:21.794 CXX test/cpp_headers/trace.o 00:03:22.053 CXX test/cpp_headers/trace_parser.o 00:03:22.053 LINK nvmf 00:03:22.053 CXX test/cpp_headers/tree.o 00:03:22.053 CXX test/cpp_headers/ublk.o 00:03:22.053 CXX test/cpp_headers/util.o 00:03:22.053 CXX test/cpp_headers/uuid.o 00:03:22.053 CXX test/cpp_headers/version.o 00:03:22.053 CXX test/cpp_headers/vfio_user_pci.o 00:03:22.053 CXX test/cpp_headers/vfio_user_spec.o 00:03:22.053 CXX test/cpp_headers/vhost.o 00:03:22.053 CXX test/cpp_headers/vmd.o 00:03:22.053 CXX test/cpp_headers/xor.o 00:03:22.053 CXX test/cpp_headers/zipf.o 00:03:22.311 LINK cuse 00:03:23.685 LINK esnap 00:03:23.943 00:03:23.943 real 1m4.527s 00:03:23.943 user 5m58.867s 00:03:23.943 sys 1m3.267s 00:03:23.943 01:58:48 make -- common/autotest_common.sh@1130 -- $ xtrace_disable 00:03:23.943 01:58:48 make -- common/autotest_common.sh@10 -- $ set +x 00:03:23.943 ************************************ 00:03:23.943 END TEST make 00:03:23.943 ************************************ 00:03:23.943 01:58:48 -- spdk/autobuild.sh@1 -- $ stop_monitor_resources 00:03:23.943 01:58:48 -- pm/common@29 -- $ signal_monitor_resources TERM 00:03:23.943 01:58:48 -- pm/common@40 -- $ local monitor pid pids signal=TERM 00:03:23.943 01:58:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.943 01:58:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-cpu-load.pid ]] 00:03:23.943 01:58:48 -- pm/common@44 -- $ pid=5072 00:03:23.943 01:58:48 -- pm/common@50 -- $ kill -TERM 5072 00:03:23.943 01:58:48 -- pm/common@42 -- $ for monitor in "${MONITOR_RESOURCES[@]}" 00:03:23.943 01:58:48 -- pm/common@43 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/power/collect-vmstat.pid ]] 00:03:23.943 01:58:48 -- pm/common@44 -- $ pid=5073 00:03:23.943 01:58:48 -- pm/common@50 -- $ kill -TERM 5073 00:03:23.943 01:58:48 -- spdk/autorun.sh@26 -- $ (( SPDK_TEST_UNITTEST == 1 || SPDK_RUN_FUNCTIONAL_TEST == 1 )) 00:03:23.943 01:58:48 -- spdk/autorun.sh@27 -- $ sudo -E /home/vagrant/spdk_repo/spdk/autotest.sh /home/vagrant/spdk_repo/autorun-spdk.conf 00:03:24.203 01:58:48 -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:03:24.203 01:58:48 -- common/autotest_common.sh@1711 -- # lcov --version 00:03:24.203 01:58:48 -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:03:24.203 01:58:48 -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:03:24.203 01:58:48 -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:03:24.203 01:58:48 -- scripts/common.sh@333 -- # local ver1 ver1_l 00:03:24.203 01:58:48 -- scripts/common.sh@334 -- # local 
ver2 ver2_l 00:03:24.203 01:58:48 -- scripts/common.sh@336 -- # IFS=.-: 00:03:24.203 01:58:48 -- scripts/common.sh@336 -- # read -ra ver1 00:03:24.203 01:58:48 -- scripts/common.sh@337 -- # IFS=.-: 00:03:24.203 01:58:48 -- scripts/common.sh@337 -- # read -ra ver2 00:03:24.203 01:58:48 -- scripts/common.sh@338 -- # local 'op=<' 00:03:24.203 01:58:48 -- scripts/common.sh@340 -- # ver1_l=2 00:03:24.203 01:58:48 -- scripts/common.sh@341 -- # ver2_l=1 00:03:24.203 01:58:48 -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:03:24.203 01:58:48 -- scripts/common.sh@344 -- # case "$op" in 00:03:24.203 01:58:48 -- scripts/common.sh@345 -- # : 1 00:03:24.203 01:58:48 -- scripts/common.sh@364 -- # (( v = 0 )) 00:03:24.203 01:58:48 -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:03:24.203 01:58:48 -- scripts/common.sh@365 -- # decimal 1 00:03:24.203 01:58:48 -- scripts/common.sh@353 -- # local d=1 00:03:24.203 01:58:48 -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:03:24.203 01:58:48 -- scripts/common.sh@355 -- # echo 1 00:03:24.203 01:58:48 -- scripts/common.sh@365 -- # ver1[v]=1 00:03:24.203 01:58:48 -- scripts/common.sh@366 -- # decimal 2 00:03:24.203 01:58:48 -- scripts/common.sh@353 -- # local d=2 00:03:24.203 01:58:48 -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:03:24.203 01:58:48 -- scripts/common.sh@355 -- # echo 2 00:03:24.203 01:58:48 -- scripts/common.sh@366 -- # ver2[v]=2 00:03:24.203 01:58:48 -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:03:24.203 01:58:48 -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:03:24.203 01:58:48 -- scripts/common.sh@368 -- # return 0 00:03:24.203 01:58:48 -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:03:24.203 01:58:48 -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:03:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.203 --rc genhtml_branch_coverage=1 00:03:24.203 --rc genhtml_function_coverage=1 00:03:24.203 --rc genhtml_legend=1 00:03:24.203 --rc geninfo_all_blocks=1 00:03:24.203 --rc geninfo_unexecuted_blocks=1 00:03:24.203 00:03:24.203 ' 00:03:24.203 01:58:48 -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:03:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.203 --rc genhtml_branch_coverage=1 00:03:24.203 --rc genhtml_function_coverage=1 00:03:24.203 --rc genhtml_legend=1 00:03:24.203 --rc geninfo_all_blocks=1 00:03:24.203 --rc geninfo_unexecuted_blocks=1 00:03:24.203 00:03:24.203 ' 00:03:24.203 01:58:48 -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:03:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.203 --rc genhtml_branch_coverage=1 00:03:24.203 --rc genhtml_function_coverage=1 00:03:24.203 --rc genhtml_legend=1 00:03:24.203 --rc geninfo_all_blocks=1 00:03:24.203 --rc geninfo_unexecuted_blocks=1 00:03:24.203 00:03:24.203 ' 00:03:24.203 01:58:48 -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:03:24.203 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:03:24.203 --rc genhtml_branch_coverage=1 00:03:24.203 --rc genhtml_function_coverage=1 00:03:24.203 --rc genhtml_legend=1 00:03:24.203 --rc geninfo_all_blocks=1 00:03:24.203 --rc geninfo_unexecuted_blocks=1 00:03:24.203 00:03:24.203 ' 00:03:24.203 01:58:48 -- spdk/autotest.sh@25 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:03:24.203 01:58:48 -- nvmf/common.sh@7 -- # uname -s 00:03:24.203 01:58:48 -- nvmf/common.sh@7 -- # [[ Linux == 
FreeBSD ]] 00:03:24.203 01:58:48 -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:03:24.203 01:58:48 -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:03:24.203 01:58:48 -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:03:24.203 01:58:48 -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:03:24.203 01:58:48 -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:03:24.203 01:58:48 -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:03:24.203 01:58:48 -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:03:24.203 01:58:48 -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:03:24.203 01:58:48 -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:03:24.203 01:58:48 -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:03:24.203 01:58:48 -- nvmf/common.sh@18 -- # NVME_HOSTID=fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:03:24.203 01:58:48 -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:03:24.203 01:58:48 -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:03:24.203 01:58:48 -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:03:24.203 01:58:48 -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:03:24.203 01:58:48 -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:03:24.203 01:58:48 -- scripts/common.sh@15 -- # shopt -s extglob 00:03:24.203 01:58:48 -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:03:24.203 01:58:48 -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:03:24.203 01:58:48 -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:03:24.203 01:58:48 -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.203 01:58:48 -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.203 01:58:48 -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.203 01:58:48 -- paths/export.sh@5 -- # export PATH 00:03:24.203 01:58:48 -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:03:24.203 01:58:48 -- nvmf/common.sh@51 -- # : 0 00:03:24.203 01:58:48 -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:03:24.203 01:58:48 -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:03:24.203 01:58:48 -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:03:24.203 01:58:48 -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:03:24.203 01:58:48 -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:03:24.203 01:58:48 -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:03:24.203 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : 
integer expression expected 00:03:24.203 01:58:48 -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:03:24.203 01:58:48 -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:03:24.203 01:58:48 -- nvmf/common.sh@55 -- # have_pci_nics=0 00:03:24.203 01:58:48 -- spdk/autotest.sh@27 -- # '[' 0 -ne 0 ']' 00:03:24.203 01:58:48 -- spdk/autotest.sh@32 -- # uname -s 00:03:24.203 01:58:48 -- spdk/autotest.sh@32 -- # '[' Linux = Linux ']' 00:03:24.203 01:58:48 -- spdk/autotest.sh@33 -- # old_core_pattern='|/usr/lib/systemd/systemd-coredump %P %u %g %s %t %c %h' 00:03:24.203 01:58:48 -- spdk/autotest.sh@34 -- # mkdir -p /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.203 01:58:48 -- spdk/autotest.sh@39 -- # echo '|/home/vagrant/spdk_repo/spdk/scripts/core-collector.sh %P %s %t' 00:03:24.203 01:58:48 -- spdk/autotest.sh@40 -- # echo /home/vagrant/spdk_repo/spdk/../output/coredumps 00:03:24.203 01:58:48 -- spdk/autotest.sh@44 -- # modprobe nbd 00:03:24.203 01:58:48 -- spdk/autotest.sh@46 -- # type -P udevadm 00:03:24.203 01:58:48 -- spdk/autotest.sh@46 -- # udevadm=/usr/sbin/udevadm 00:03:24.203 01:58:48 -- spdk/autotest.sh@48 -- # udevadm_pid=56017 00:03:24.203 01:58:48 -- spdk/autotest.sh@53 -- # start_monitor_resources 00:03:24.203 01:58:48 -- pm/common@17 -- # local monitor 00:03:24.203 01:58:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.203 01:58:48 -- pm/common@19 -- # for monitor in "${MONITOR_RESOURCES[@]}" 00:03:24.203 01:58:48 -- spdk/autotest.sh@47 -- # /usr/sbin/udevadm monitor --property 00:03:24.203 01:58:48 -- pm/common@25 -- # sleep 1 00:03:24.203 01:58:48 -- pm/common@21 -- # date +%s 00:03:24.203 01:58:48 -- pm/common@21 -- # date +%s 00:03:24.203 01:58:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-vmstat -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734227928 00:03:24.204 01:58:48 -- pm/common@21 -- # /home/vagrant/spdk_repo/spdk/scripts/perf/pm/collect-cpu-load -d /home/vagrant/spdk_repo/spdk/../output/power -l -p monitor.autotest.sh.1734227928 00:03:24.204 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734227928_collect-cpu-load.pm.log 00:03:24.204 Redirecting to /home/vagrant/spdk_repo/spdk/../output/power/monitor.autotest.sh.1734227928_collect-vmstat.pm.log 00:03:25.143 01:58:49 -- spdk/autotest.sh@55 -- # trap 'autotest_cleanup || :; exit 1' SIGINT SIGTERM EXIT 00:03:25.143 01:58:49 -- spdk/autotest.sh@57 -- # timing_enter autotest 00:03:25.143 01:58:49 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:25.143 01:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:25.143 01:58:49 -- spdk/autotest.sh@59 -- # create_test_list 00:03:25.143 01:58:49 -- common/autotest_common.sh@752 -- # xtrace_disable 00:03:25.143 01:58:49 -- common/autotest_common.sh@10 -- # set +x 00:03:25.403 01:58:49 -- spdk/autotest.sh@61 -- # dirname /home/vagrant/spdk_repo/spdk/autotest.sh 00:03:25.403 01:58:49 -- spdk/autotest.sh@61 -- # readlink -f /home/vagrant/spdk_repo/spdk 00:03:25.403 01:58:49 -- spdk/autotest.sh@61 -- # src=/home/vagrant/spdk_repo/spdk 00:03:25.403 01:58:49 -- spdk/autotest.sh@62 -- # out=/home/vagrant/spdk_repo/spdk/../output 00:03:25.403 01:58:49 -- spdk/autotest.sh@63 -- # cd /home/vagrant/spdk_repo/spdk 00:03:25.403 01:58:49 -- spdk/autotest.sh@65 -- # freebsd_update_contigmem_mod 00:03:25.403 01:58:49 -- common/autotest_common.sh@1457 -- # uname 00:03:25.403 01:58:49 -- common/autotest_common.sh@1457 -- # '[' Linux = FreeBSD ']' 00:03:25.403 01:58:49 -- 
spdk/autotest.sh@66 -- # freebsd_set_maxsock_buf 00:03:25.403 01:58:49 -- common/autotest_common.sh@1477 -- # uname 00:03:25.403 01:58:49 -- common/autotest_common.sh@1477 -- # [[ Linux = FreeBSD ]] 00:03:25.404 01:58:49 -- spdk/autotest.sh@68 -- # [[ y == y ]] 00:03:25.404 01:58:49 -- spdk/autotest.sh@70 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 --version 00:03:25.404 lcov: LCOV version 1.15 00:03:25.404 01:58:49 -- spdk/autotest.sh@72 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -i -t Baseline -d /home/vagrant/spdk_repo/spdk -o /home/vagrant/spdk_repo/spdk/../output/cov_base.info 00:03:40.317 /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno:no functions found 00:03:40.317 geninfo: WARNING: GCOV did not produce any data for /home/vagrant/spdk_repo/spdk/lib/nvme/nvme_stubs.gcno 00:03:55.224 01:59:18 -- spdk/autotest.sh@76 -- # timing_enter pre_cleanup 00:03:55.224 01:59:18 -- common/autotest_common.sh@726 -- # xtrace_disable 00:03:55.224 01:59:18 -- common/autotest_common.sh@10 -- # set +x 00:03:55.224 01:59:18 -- spdk/autotest.sh@78 -- # rm -f 00:03:55.224 01:59:18 -- spdk/autotest.sh@81 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:03:55.224 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:55.224 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:03:55.224 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:03:55.486 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:03:55.486 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:03:55.486 01:59:20 -- spdk/autotest.sh@83 -- # get_zoned_devs 00:03:55.486 01:59:20 -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:03:55.486 01:59:20 -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:03:55.486 01:59:20 -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:03:55.486 01:59:20 -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:03:55.486 01:59:20 -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:03:55.486 01:59:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:03:55.486 01:59:20 -- 
common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n2 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n2 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n2/queue/zoned ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n3 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme1n3 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n3/queue/zoned ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:03:55.486 01:59:20 -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:03:55.486 01:59:20 -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:03:55.486 01:59:20 -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:03:55.486 01:59:20 -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:03:55.486 01:59:20 -- spdk/autotest.sh@85 -- # (( 0 > 0 )) 00:03:55.486 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.486 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.486 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme0n1 00:03:55.486 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:03:55.486 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:03:55.486 No valid GPT data, bailing 00:03:55.486 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:03:55.486 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:55.486 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:55.486 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme0n1 bs=1M count=1 00:03:55.486 1+0 records in 00:03:55.486 1+0 records out 00:03:55.486 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0288816 s, 36.3 MB/s 00:03:55.486 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.486 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.486 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n1 00:03:55.486 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n1 pt 00:03:55.486 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n1 00:03:55.486 No valid GPT data, bailing 
00:03:55.486 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n1 00:03:55.486 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:55.486 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:55.486 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n1 bs=1M count=1 00:03:55.748 1+0 records in 00:03:55.748 1+0 records out 00:03:55.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00603268 s, 174 MB/s 00:03:55.748 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.748 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.748 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n2 00:03:55.748 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n2 pt 00:03:55.748 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n2 00:03:55.748 No valid GPT data, bailing 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n2 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:55.748 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:55.748 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n2 bs=1M count=1 00:03:55.748 1+0 records in 00:03:55.748 1+0 records out 00:03:55.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00598702 s, 175 MB/s 00:03:55.748 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.748 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.748 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme1n3 00:03:55.748 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme1n3 pt 00:03:55.748 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme1n3 00:03:55.748 No valid GPT data, bailing 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme1n3 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:55.748 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:55.748 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme1n3 bs=1M count=1 00:03:55.748 1+0 records in 00:03:55.748 1+0 records out 00:03:55.748 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00460725 s, 228 MB/s 00:03:55.748 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:55.748 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:55.748 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme2n1 00:03:55.748 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme2n1 pt 00:03:55.748 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme2n1 00:03:55.748 No valid GPT data, bailing 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme2n1 00:03:55.748 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:55.748 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:55.748 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme2n1 bs=1M count=1 00:03:56.009 1+0 records in 00:03:56.009 1+0 records out 00:03:56.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00607855 s, 173 MB/s 00:03:56.009 01:59:20 -- spdk/autotest.sh@97 -- # for dev in /dev/nvme*n!(*p*) 00:03:56.009 01:59:20 -- spdk/autotest.sh@99 -- # [[ -z '' ]] 00:03:56.009 01:59:20 -- spdk/autotest.sh@100 -- # block_in_use /dev/nvme3n1 00:03:56.009 01:59:20 -- scripts/common.sh@381 -- # local block=/dev/nvme3n1 pt 00:03:56.009 01:59:20 -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme3n1 00:03:56.009 No valid GPT data, 
bailing 00:03:56.009 01:59:20 -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme3n1 00:03:56.009 01:59:20 -- scripts/common.sh@394 -- # pt= 00:03:56.009 01:59:20 -- scripts/common.sh@395 -- # return 1 00:03:56.009 01:59:20 -- spdk/autotest.sh@101 -- # dd if=/dev/zero of=/dev/nvme3n1 bs=1M count=1 00:03:56.009 1+0 records in 00:03:56.009 1+0 records out 00:03:56.009 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00581169 s, 180 MB/s 00:03:56.009 01:59:20 -- spdk/autotest.sh@105 -- # sync 00:03:56.009 01:59:20 -- spdk/autotest.sh@107 -- # xtrace_disable_per_cmd reap_spdk_processes 00:03:56.009 01:59:20 -- common/autotest_common.sh@22 -- # eval 'reap_spdk_processes 12> /dev/null' 00:03:56.009 01:59:20 -- common/autotest_common.sh@22 -- # reap_spdk_processes 00:03:57.951 01:59:22 -- spdk/autotest.sh@111 -- # uname -s 00:03:57.951 01:59:22 -- spdk/autotest.sh@111 -- # [[ Linux == Linux ]] 00:03:57.951 01:59:22 -- spdk/autotest.sh@111 -- # [[ 0 -eq 1 ]] 00:03:57.951 01:59:22 -- spdk/autotest.sh@115 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh status 00:03:58.213 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:03:58.785 Hugepages 00:03:58.785 node hugesize free / total 00:03:58.785 node0 1048576kB 0 / 0 00:03:58.785 node0 2048kB 0 / 0 00:03:58.785 00:03:58.785 Type BDF Vendor Device NUMA Driver Device Block devices 00:03:58.785 virtio 0000:00:03.0 1af4 1001 unknown virtio-pci - vda 00:03:58.785 NVMe 0000:00:10.0 1b36 0010 unknown nvme nvme0 nvme0n1 00:03:59.046 NVMe 0000:00:11.0 1b36 0010 unknown nvme nvme2 nvme2n1 00:03:59.046 NVMe 0000:00:12.0 1b36 0010 unknown nvme nvme1 nvme1n1 nvme1n2 nvme1n3 00:03:59.046 NVMe 0000:00:13.0 1b36 0010 unknown nvme nvme3 nvme3n1 00:03:59.046 01:59:23 -- spdk/autotest.sh@117 -- # uname -s 00:03:59.046 01:59:23 -- spdk/autotest.sh@117 -- # [[ Linux == Linux ]] 00:03:59.046 01:59:23 -- spdk/autotest.sh@119 -- # nvme_namespace_revert 00:03:59.046 01:59:23 -- common/autotest_common.sh@1516 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:03:59.617 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:00.189 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.189 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.189 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.189 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:00.189 01:59:24 -- common/autotest_common.sh@1517 -- # sleep 1 00:04:01.573 01:59:25 -- common/autotest_common.sh@1518 -- # bdfs=() 00:04:01.573 01:59:25 -- common/autotest_common.sh@1518 -- # local bdfs 00:04:01.573 01:59:25 -- common/autotest_common.sh@1520 -- # bdfs=($(get_nvme_bdfs)) 00:04:01.573 01:59:25 -- common/autotest_common.sh@1520 -- # get_nvme_bdfs 00:04:01.573 01:59:25 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:01.573 01:59:25 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:01.573 01:59:25 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:01.573 01:59:25 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:01.573 01:59:25 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:01.573 01:59:25 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:01.573 01:59:25 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:01.573 01:59:25 -- 
common/autotest_common.sh@1522 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:04:01.573 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:01.834 Waiting for block devices as requested 00:04:01.834 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:04:01.834 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.095 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:04:02.095 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:04:07.383 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:04:07.383 01:59:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.383 01:59:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:10.0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:10.0/nvme/nvme 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:10.0/nvme/nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme1 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.383 01:59:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.383 01:59:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme1 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.383 01:59:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1543 -- # continue 00:04:07.383 01:59:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.383 01:59:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:11.0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:11.0/nvme/nvme 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:11.0/nvme/nvme0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme0 00:04:07.383 01:59:31 -- 
common/autotest_common.sh@1526 -- # [[ -z /dev/nvme0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.383 01:59:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.383 01:59:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.383 01:59:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.383 01:59:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.383 01:59:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1543 -- # continue 00:04:07.383 01:59:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.383 01:59:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:12.0 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:12.0/nvme/nvme 00:04:07.383 01:59:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:07.383 01:59:31 -- common/autotest_common.sh@1488 -- # [[ -z /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 ]] 00:04:07.383 01:59:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:12.0/nvme/nvme2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme2 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.384 01:59:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.384 01:59:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.384 01:59:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1543 -- # continue 00:04:07.384 01:59:31 -- common/autotest_common.sh@1524 -- # for bdf in "${bdfs[@]}" 00:04:07.384 01:59:31 -- common/autotest_common.sh@1525 -- # get_nvme_ctrlr_from_bdf 0000:00:13.0 00:04:07.384 01:59:31 -- common/autotest_common.sh@1487 -- # grep 0000:00:13.0/nvme/nvme 00:04:07.384 01:59:31 -- common/autotest_common.sh@1487 -- # readlink -f /sys/class/nvme/nvme0 /sys/class/nvme/nvme1 /sys/class/nvme/nvme2 /sys/class/nvme/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1487 -- # bdf_sysfs_path=/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1488 -- # [[ -z 
/sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1492 -- # basename /sys/devices/pci0000:00/0000:00:13.0/nvme/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1492 -- # printf '%s\n' nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1525 -- # nvme_ctrlr=/dev/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1526 -- # [[ -z /dev/nvme3 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # nvme id-ctrl /dev/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # grep oacs 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # cut -d: -f2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1531 -- # oacs=' 0x12a' 00:04:07.384 01:59:31 -- common/autotest_common.sh@1532 -- # oacs_ns_manage=8 00:04:07.384 01:59:31 -- common/autotest_common.sh@1534 -- # [[ 8 -ne 0 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # grep unvmcap 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # cut -d: -f2 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # nvme id-ctrl /dev/nvme3 00:04:07.384 01:59:31 -- common/autotest_common.sh@1540 -- # unvmcap=' 0' 00:04:07.384 01:59:31 -- common/autotest_common.sh@1541 -- # [[ 0 -eq 0 ]] 00:04:07.384 01:59:31 -- common/autotest_common.sh@1543 -- # continue 00:04:07.384 01:59:31 -- spdk/autotest.sh@122 -- # timing_exit pre_cleanup 00:04:07.384 01:59:31 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:07.384 01:59:31 -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 01:59:31 -- spdk/autotest.sh@125 -- # timing_enter afterboot 00:04:07.384 01:59:31 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:07.384 01:59:31 -- common/autotest_common.sh@10 -- # set +x 00:04:07.384 01:59:31 -- spdk/autotest.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:04:07.963 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:04:08.536 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.536 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.536 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.536 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:04:08.536 01:59:33 -- spdk/autotest.sh@127 -- # timing_exit afterboot 00:04:08.536 01:59:33 -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:08.536 01:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.536 01:59:33 -- spdk/autotest.sh@131 -- # opal_revert_cleanup 00:04:08.536 01:59:33 -- common/autotest_common.sh@1578 -- # mapfile -t bdfs 00:04:08.536 01:59:33 -- common/autotest_common.sh@1578 -- # get_nvme_bdfs_by_id 0x0a54 00:04:08.536 01:59:33 -- common/autotest_common.sh@1563 -- # bdfs=() 00:04:08.536 01:59:33 -- common/autotest_common.sh@1563 -- # _bdfs=() 00:04:08.536 01:59:33 -- common/autotest_common.sh@1563 -- # local bdfs _bdfs 00:04:08.536 01:59:33 -- common/autotest_common.sh@1564 -- # _bdfs=($(get_nvme_bdfs)) 00:04:08.536 01:59:33 -- common/autotest_common.sh@1564 -- # get_nvme_bdfs 00:04:08.536 01:59:33 -- common/autotest_common.sh@1498 -- # bdfs=() 00:04:08.536 01:59:33 -- common/autotest_common.sh@1498 -- # local bdfs 00:04:08.536 01:59:33 -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:04:08.536 01:59:33 -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:04:08.536 01:59:33 -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:04:08.536 
01:59:33 -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:04:08.536 01:59:33 -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:04:08.536 01:59:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:10.0/device 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.536 01:59:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.536 01:59:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:11.0/device 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.536 01:59:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.536 01:59:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:12.0/device 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.536 01:59:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.536 01:59:33 -- common/autotest_common.sh@1565 -- # for bdf in "${_bdfs[@]}" 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # cat /sys/bus/pci/devices/0000:00:13.0/device 00:04:08.536 01:59:33 -- common/autotest_common.sh@1566 -- # device=0x0010 00:04:08.536 01:59:33 -- common/autotest_common.sh@1567 -- # [[ 0x0010 == \0\x\0\a\5\4 ]] 00:04:08.536 01:59:33 -- common/autotest_common.sh@1572 -- # (( 0 > 0 )) 00:04:08.536 01:59:33 -- common/autotest_common.sh@1572 -- # return 0 00:04:08.536 01:59:33 -- common/autotest_common.sh@1579 -- # [[ -z '' ]] 00:04:08.536 01:59:33 -- common/autotest_common.sh@1580 -- # return 0 00:04:08.536 01:59:33 -- spdk/autotest.sh@137 -- # '[' 0 -eq 1 ']' 00:04:08.797 01:59:33 -- spdk/autotest.sh@141 -- # '[' 1 -eq 1 ']' 00:04:08.797 01:59:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.797 01:59:33 -- spdk/autotest.sh@142 -- # [[ 0 -eq 1 ]] 00:04:08.797 01:59:33 -- spdk/autotest.sh@149 -- # timing_enter lib 00:04:08.797 01:59:33 -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:08.797 01:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.797 01:59:33 -- spdk/autotest.sh@151 -- # [[ 0 -eq 1 ]] 00:04:08.797 01:59:33 -- spdk/autotest.sh@155 -- # run_test env /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.797 01:59:33 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.797 01:59:33 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.797 01:59:33 -- common/autotest_common.sh@10 -- # set +x 00:04:08.797 ************************************ 00:04:08.797 START TEST env 00:04:08.797 ************************************ 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env.sh 00:04:08.797 * Looking for test storage... 
00:04:08.797 * Found test storage at /home/vagrant/spdk_repo/spdk/test/env 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1711 -- # lcov --version 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:08.797 01:59:33 env -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:08.797 01:59:33 env -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:08.797 01:59:33 env -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:08.797 01:59:33 env -- scripts/common.sh@336 -- # IFS=.-: 00:04:08.797 01:59:33 env -- scripts/common.sh@336 -- # read -ra ver1 00:04:08.797 01:59:33 env -- scripts/common.sh@337 -- # IFS=.-: 00:04:08.797 01:59:33 env -- scripts/common.sh@337 -- # read -ra ver2 00:04:08.797 01:59:33 env -- scripts/common.sh@338 -- # local 'op=<' 00:04:08.797 01:59:33 env -- scripts/common.sh@340 -- # ver1_l=2 00:04:08.797 01:59:33 env -- scripts/common.sh@341 -- # ver2_l=1 00:04:08.797 01:59:33 env -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:08.797 01:59:33 env -- scripts/common.sh@344 -- # case "$op" in 00:04:08.797 01:59:33 env -- scripts/common.sh@345 -- # : 1 00:04:08.797 01:59:33 env -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:08.797 01:59:33 env -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:08.797 01:59:33 env -- scripts/common.sh@365 -- # decimal 1 00:04:08.797 01:59:33 env -- scripts/common.sh@353 -- # local d=1 00:04:08.797 01:59:33 env -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:08.797 01:59:33 env -- scripts/common.sh@355 -- # echo 1 00:04:08.797 01:59:33 env -- scripts/common.sh@365 -- # ver1[v]=1 00:04:08.797 01:59:33 env -- scripts/common.sh@366 -- # decimal 2 00:04:08.797 01:59:33 env -- scripts/common.sh@353 -- # local d=2 00:04:08.797 01:59:33 env -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:08.797 01:59:33 env -- scripts/common.sh@355 -- # echo 2 00:04:08.797 01:59:33 env -- scripts/common.sh@366 -- # ver2[v]=2 00:04:08.797 01:59:33 env -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:08.797 01:59:33 env -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:08.797 01:59:33 env -- scripts/common.sh@368 -- # return 0 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.797 --rc genhtml_branch_coverage=1 00:04:08.797 --rc genhtml_function_coverage=1 00:04:08.797 --rc genhtml_legend=1 00:04:08.797 --rc geninfo_all_blocks=1 00:04:08.797 --rc geninfo_unexecuted_blocks=1 00:04:08.797 00:04:08.797 ' 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.797 --rc genhtml_branch_coverage=1 00:04:08.797 --rc genhtml_function_coverage=1 00:04:08.797 --rc genhtml_legend=1 00:04:08.797 --rc geninfo_all_blocks=1 00:04:08.797 --rc geninfo_unexecuted_blocks=1 00:04:08.797 00:04:08.797 ' 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.797 --rc genhtml_branch_coverage=1 00:04:08.797 --rc genhtml_function_coverage=1 00:04:08.797 --rc 
genhtml_legend=1 00:04:08.797 --rc geninfo_all_blocks=1 00:04:08.797 --rc geninfo_unexecuted_blocks=1 00:04:08.797 00:04:08.797 ' 00:04:08.797 01:59:33 env -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:08.797 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:08.797 --rc genhtml_branch_coverage=1 00:04:08.797 --rc genhtml_function_coverage=1 00:04:08.797 --rc genhtml_legend=1 00:04:08.797 --rc geninfo_all_blocks=1 00:04:08.797 --rc geninfo_unexecuted_blocks=1 00:04:08.797 00:04:08.797 ' 00:04:08.798 01:59:33 env -- env/env.sh@10 -- # run_test env_memory /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.798 01:59:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:08.798 01:59:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:08.798 01:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:08.798 ************************************ 00:04:08.798 START TEST env_memory 00:04:08.798 ************************************ 00:04:08.798 01:59:33 env.env_memory -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/memory/memory_ut 00:04:08.798 00:04:08.798 00:04:08.798 CUnit - A unit testing framework for C - Version 2.1-3 00:04:08.798 http://cunit.sourceforge.net/ 00:04:08.798 00:04:08.798 00:04:08.798 Suite: memory 00:04:08.798 Test: alloc and free memory map ...[2024-12-15 01:59:33.540436] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 283:spdk_mem_map_alloc: *ERROR*: Initial mem_map notify failed 00:04:09.059 passed 00:04:09.059 Test: mem map translation ...[2024-12-15 01:59:33.579257] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=2097152 len=1234 00:04:09.059 [2024-12-15 01:59:33.579308] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 595:spdk_mem_map_set_translation: *ERROR*: invalid spdk_mem_map_set_translation parameters, vaddr=1234 len=2097152 00:04:09.059 [2024-12-15 01:59:33.579371] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 589:spdk_mem_map_set_translation: *ERROR*: invalid usermode virtual address 281474976710656 00:04:09.059 [2024-12-15 01:59:33.579387] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 605:spdk_mem_map_set_translation: *ERROR*: could not get 0xffffffe00000 map 00:04:09.059 passed 00:04:09.059 Test: mem map registration ...[2024-12-15 01:59:33.647515] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=200000 len=1234 00:04:09.059 [2024-12-15 01:59:33.647565] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/memory.c: 347:spdk_mem_register: *ERROR*: invalid spdk_mem_register parameters, vaddr=4d2 len=2097152 00:04:09.059 passed 00:04:09.059 Test: mem map adjacent registrations ...passed 00:04:09.059 00:04:09.059 Run Summary: Type Total Ran Passed Failed Inactive 00:04:09.059 suites 1 1 n/a 0 0 00:04:09.059 tests 4 4 4 0 0 00:04:09.059 asserts 152 152 152 0 n/a 00:04:09.059 00:04:09.059 Elapsed time = 0.233 seconds 00:04:09.059 00:04:09.059 real 0m0.266s 00:04:09.059 user 0m0.245s 00:04:09.059 sys 0m0.015s 00:04:09.059 01:59:33 env.env_memory -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:09.059 ************************************ 00:04:09.059 END TEST env_memory 00:04:09.059 ************************************ 00:04:09.059 01:59:33 env.env_memory -- common/autotest_common.sh@10 -- # set +x 00:04:09.059 01:59:33 env -- env/env.sh@11 -- # run_test env_vtophys 
/home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:09.059 01:59:33 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:09.059 01:59:33 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:09.059 01:59:33 env -- common/autotest_common.sh@10 -- # set +x 00:04:09.320 ************************************ 00:04:09.320 START TEST env_vtophys 00:04:09.320 ************************************ 00:04:09.320 01:59:33 env.env_vtophys -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/vtophys/vtophys 00:04:09.320 EAL: lib.eal log level changed from notice to debug 00:04:09.320 EAL: Detected lcore 0 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 1 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 2 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 3 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 4 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 5 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 6 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 7 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 8 as core 0 on socket 0 00:04:09.320 EAL: Detected lcore 9 as core 0 on socket 0 00:04:09.320 EAL: Maximum logical cores by configuration: 128 00:04:09.320 EAL: Detected CPU lcores: 10 00:04:09.320 EAL: Detected NUMA nodes: 1 00:04:09.320 EAL: Checking presence of .so 'librte_eal.so.24.1' 00:04:09.320 EAL: Detected shared linkage of DPDK 00:04:09.320 EAL: No shared files mode enabled, IPC will be disabled 00:04:09.320 EAL: Selected IOVA mode 'PA' 00:04:09.320 EAL: Probing VFIO support... 00:04:09.320 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.320 EAL: VFIO modules not loaded, skipping VFIO support... 00:04:09.320 EAL: Ask a virtual area of 0x2e000 bytes 00:04:09.320 EAL: Virtual area found at 0x200000000000 (size = 0x2e000) 00:04:09.320 EAL: Setting up physically contiguous memory... 
00:04:09.320 EAL: Setting maximum number of open files to 524288 00:04:09.320 EAL: Detected memory type: socket_id:0 hugepage_sz:2097152 00:04:09.320 EAL: Creating 4 segment lists: n_segs:8192 socket_id:0 hugepage_sz:2097152 00:04:09.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.320 EAL: Virtual area found at 0x20000002e000 (size = 0x61000) 00:04:09.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.320 EAL: Virtual area found at 0x200000200000 (size = 0x400000000) 00:04:09.320 EAL: VA reserved for memseg list at 0x200000200000, size 400000000 00:04:09.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.320 EAL: Virtual area found at 0x200400200000 (size = 0x61000) 00:04:09.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.320 EAL: Virtual area found at 0x200400400000 (size = 0x400000000) 00:04:09.320 EAL: VA reserved for memseg list at 0x200400400000, size 400000000 00:04:09.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.320 EAL: Virtual area found at 0x200800400000 (size = 0x61000) 00:04:09.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.320 EAL: Virtual area found at 0x200800600000 (size = 0x400000000) 00:04:09.320 EAL: VA reserved for memseg list at 0x200800600000, size 400000000 00:04:09.320 EAL: Ask a virtual area of 0x61000 bytes 00:04:09.320 EAL: Virtual area found at 0x200c00600000 (size = 0x61000) 00:04:09.320 EAL: Memseg list allocated at socket 0, page size 0x800kB 00:04:09.320 EAL: Ask a virtual area of 0x400000000 bytes 00:04:09.320 EAL: Virtual area found at 0x200c00800000 (size = 0x400000000) 00:04:09.320 EAL: VA reserved for memseg list at 0x200c00800000, size 400000000 00:04:09.320 EAL: Hugepages will be freed exactly as allocated. 00:04:09.320 EAL: No shared files mode enabled, IPC is disabled 00:04:09.320 EAL: No shared files mode enabled, IPC is disabled 00:04:09.320 EAL: TSC frequency is ~2600000 KHz 00:04:09.320 EAL: Main lcore 0 is ready (tid=7f344489fa40;cpuset=[0]) 00:04:09.320 EAL: Trying to obtain current memory policy. 00:04:09.320 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.320 EAL: Restoring previous memory policy: 0 00:04:09.320 EAL: request: mp_malloc_sync 00:04:09.320 EAL: No shared files mode enabled, IPC is disabled 00:04:09.320 EAL: Heap on socket 0 was expanded by 2MB 00:04:09.320 EAL: Module /sys/module/vfio not found! error 2 (No such file or directory) 00:04:09.320 EAL: No PCI address specified using 'addr=' in: bus=pci 00:04:09.320 EAL: Mem event callback 'spdk:(nil)' registered 00:04:09.320 EAL: Module /sys/module/vfio_pci not found! error 2 (No such file or directory) 00:04:09.320 00:04:09.320 00:04:09.320 CUnit - A unit testing framework for C - Version 2.1-3 00:04:09.320 http://cunit.sourceforge.net/ 00:04:09.320 00:04:09.320 00:04:09.320 Suite: components_suite 00:04:09.892 Test: vtophys_malloc_test ...passed 00:04:09.892 Test: vtophys_spdk_malloc_test ...EAL: Trying to obtain current memory policy. 
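For orientation, the vtophys_malloc_test that just passed above exercises the allocate-translate-free loop sketched below: each allocation forces DPDK to expand the hugepage heap (the "Heap on socket 0 was expanded by N MB" lines that follow), the buffer's physical translation is verified, and freeing lets the heap shrink again. This is a minimal sketch against the public SPDK env API (spdk_dma_malloc, spdk_vtophys, spdk_dma_free); the size progression and the helper name are illustrative assumptions, not the test's actual source.

#include "spdk/env.h"
#include <assert.h>
#include <stddef.h>

/* Illustrative sketch of the allocate -> translate -> free pattern. */
static void
vtophys_loop_sketch(void)
{
    for (size_t size = 4UL << 20; size <= 64UL << 20; size *= 2) {
        /* DMA-safe, 4 KiB-aligned allocation; DPDK grows the heap on
         * demand, which prints "Heap on socket 0 was expanded by ...". */
        void *buf = spdk_dma_malloc(size, 0x1000, NULL);
        assert(buf != NULL);

        /* spdk_vtophys() returns SPDK_VTOPHYS_ERROR when no virtual-to-
         * physical translation exists for the buffer. */
        assert(spdk_vtophys(buf, NULL) != SPDK_VTOPHYS_ERROR);

        /* Freeing allows the heap to shrink: "... was shrunk by N MB". */
        spdk_dma_free(buf);
    }
}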
00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 4MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 4MB 00:04:09.892 EAL: Trying to obtain current memory policy. 00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 6MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 6MB 00:04:09.892 EAL: Trying to obtain current memory policy. 00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 10MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 10MB 00:04:09.892 EAL: Trying to obtain current memory policy. 00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 18MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 18MB 00:04:09.892 EAL: Trying to obtain current memory policy. 00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 34MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 34MB 00:04:09.892 EAL: Trying to obtain current memory policy. 
00:04:09.892 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:09.892 EAL: Restoring previous memory policy: 4 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was expanded by 66MB 00:04:09.892 EAL: Calling mem event callback 'spdk:(nil)' 00:04:09.892 EAL: request: mp_malloc_sync 00:04:09.892 EAL: No shared files mode enabled, IPC is disabled 00:04:09.892 EAL: Heap on socket 0 was shrunk by 66MB 00:04:10.153 EAL: Trying to obtain current memory policy. 00:04:10.153 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.153 EAL: Restoring previous memory policy: 4 00:04:10.153 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.153 EAL: request: mp_malloc_sync 00:04:10.153 EAL: No shared files mode enabled, IPC is disabled 00:04:10.153 EAL: Heap on socket 0 was expanded by 130MB 00:04:10.153 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.153 EAL: request: mp_malloc_sync 00:04:10.153 EAL: No shared files mode enabled, IPC is disabled 00:04:10.153 EAL: Heap on socket 0 was shrunk by 130MB 00:04:10.457 EAL: Trying to obtain current memory policy. 00:04:10.457 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:10.457 EAL: Restoring previous memory policy: 4 00:04:10.457 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.457 EAL: request: mp_malloc_sync 00:04:10.457 EAL: No shared files mode enabled, IPC is disabled 00:04:10.457 EAL: Heap on socket 0 was expanded by 258MB 00:04:10.717 EAL: Calling mem event callback 'spdk:(nil)' 00:04:10.717 EAL: request: mp_malloc_sync 00:04:10.717 EAL: No shared files mode enabled, IPC is disabled 00:04:10.717 EAL: Heap on socket 0 was shrunk by 258MB 00:04:10.978 EAL: Trying to obtain current memory policy. 00:04:10.978 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:11.239 EAL: Restoring previous memory policy: 4 00:04:11.239 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.239 EAL: request: mp_malloc_sync 00:04:11.239 EAL: No shared files mode enabled, IPC is disabled 00:04:11.239 EAL: Heap on socket 0 was expanded by 514MB 00:04:11.811 EAL: Calling mem event callback 'spdk:(nil)' 00:04:11.811 EAL: request: mp_malloc_sync 00:04:11.811 EAL: No shared files mode enabled, IPC is disabled 00:04:11.811 EAL: Heap on socket 0 was shrunk by 514MB 00:04:12.382 EAL: Trying to obtain current memory policy. 
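Each "Calling mem event callback 'spdk:(nil)'" line in this sequence is DPDK invoking a callback that SPDK registered for heap ALLOC/FREE events; the "(nil)" is the NULL context argument. Below is a hedged sketch of that registration mechanism using DPDK's public rte_mem_event_callback_register(); the callback body and the "example" name are illustrative, not SPDK's actual handler.

#include <rte_memory.h>
#include <stdio.h>

/* Invoked by DPDK on every heap change: ALLOC fires for the "expanded by"
 * lines above, FREE for the "shrunk by" lines. */
static void
mem_event_cb(enum rte_mem_event event_type, const void *addr, size_t len,
             void *arg)
{
    printf("mem event %s: addr=%p len=%zu\n",
           event_type == RTE_MEM_EVENT_ALLOC ? "ALLOC" : "FREE", addr, len);
}

/* Registration: the name and the arg are what the log prints as
 * "callback 'name:(arg)'"; passing NULL shows up as "(nil)". */
static void
register_mem_event_example(void)
{
    rte_mem_event_callback_register("example", mem_event_cb, NULL);
}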
00:04:12.382 EAL: Setting policy MPOL_PREFERRED for socket 0 00:04:12.643 EAL: Restoring previous memory policy: 4 00:04:12.643 EAL: Calling mem event callback 'spdk:(nil)' 00:04:12.643 EAL: request: mp_malloc_sync 00:04:12.643 EAL: No shared files mode enabled, IPC is disabled 00:04:12.643 EAL: Heap on socket 0 was expanded by 1026MB 00:04:13.584 EAL: Calling mem event callback 'spdk:(nil)' 00:04:13.843 EAL: request: mp_malloc_sync 00:04:13.843 EAL: No shared files mode enabled, IPC is disabled 00:04:13.843 EAL: Heap on socket 0 was shrunk by 1026MB 00:04:14.409 passed 00:04:14.409 00:04:14.409 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.409 suites 1 1 n/a 0 0 00:04:14.409 tests 2 2 2 0 0 00:04:14.409 asserts 5747 5747 5747 0 n/a 00:04:14.409 00:04:14.409 Elapsed time = 5.087 seconds 00:04:14.409 EAL: Calling mem event callback 'spdk:(nil)' 00:04:14.409 EAL: request: mp_malloc_sync 00:04:14.409 EAL: No shared files mode enabled, IPC is disabled 00:04:14.409 EAL: Heap on socket 0 was shrunk by 2MB 00:04:14.409 EAL: No shared files mode enabled, IPC is disabled 00:04:14.409 EAL: No shared files mode enabled, IPC is disabled 00:04:14.409 EAL: No shared files mode enabled, IPC is disabled 00:04:14.669 00:04:14.669 real 0m5.370s 00:04:14.669 user 0m4.348s 00:04:14.669 sys 0m0.866s 00:04:14.669 01:59:39 env.env_vtophys -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.669 ************************************ 00:04:14.669 END TEST env_vtophys 00:04:14.669 ************************************ 00:04:14.669 01:59:39 env.env_vtophys -- common/autotest_common.sh@10 -- # set +x 00:04:14.669 01:59:39 env -- env/env.sh@12 -- # run_test env_pci /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.669 01:59:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.669 01:59:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.669 01:59:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.669 ************************************ 00:04:14.669 START TEST env_pci 00:04:14.669 ************************************ 00:04:14.669 01:59:39 env.env_pci -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/pci/pci_ut 00:04:14.669 00:04:14.669 00:04:14.669 CUnit - A unit testing framework for C - Version 2.1-3 00:04:14.669 http://cunit.sourceforge.net/ 00:04:14.669 00:04:14.670 00:04:14.670 Suite: pci 00:04:14.670 Test: pci_hook ...[2024-12-15 01:59:39.275156] /home/vagrant/spdk_repo/spdk/lib/env_dpdk/pci.c:1117:spdk_pci_device_claim: *ERROR*: Cannot create lock on device /var/tmp/spdk_pci_lock_10000:00:01.0, probably process 58827 has claimed it 00:04:14.670 passed 00:04:14.670 00:04:14.670 Run Summary: Type Total Ran Passed Failed Inactive 00:04:14.670 suites 1 1 n/a 0 0 00:04:14.670 tests 1 1 1 0 0 00:04:14.670 asserts 25 25 25 0 n/a 00:04:14.670 00:04:14.670 Elapsed time = 0.007 seconds 00:04:14.670 EAL: Cannot find device (10000:00:01.0) 00:04:14.670 EAL: Failed to attach device on primary process 00:04:14.670 00:04:14.670 real 0m0.060s 00:04:14.670 user 0m0.026s 00:04:14.670 sys 0m0.033s 00:04:14.670 ************************************ 00:04:14.670 END TEST env_pci 00:04:14.670 ************************************ 00:04:14.670 01:59:39 env.env_pci -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.670 01:59:39 env.env_pci -- common/autotest_common.sh@10 -- # set +x 00:04:14.670 01:59:39 env -- env/env.sh@14 -- # argv='-c 0x1 ' 00:04:14.670 01:59:39 env -- env/env.sh@15 -- # uname 00:04:14.670 01:59:39 env 
-- env/env.sh@15 -- # '[' Linux = Linux ']' 00:04:14.670 01:59:39 env -- env/env.sh@22 -- # argv+=--base-virtaddr=0x200000000000 00:04:14.670 01:59:39 env -- env/env.sh@24 -- # run_test env_dpdk_post_init /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.670 01:59:39 env -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:04:14.670 01:59:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.670 01:59:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.670 ************************************ 00:04:14.670 START TEST env_dpdk_post_init 00:04:14.670 ************************************ 00:04:14.670 01:59:39 env.env_dpdk_post_init -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/env_dpdk_post_init/env_dpdk_post_init -c 0x1 --base-virtaddr=0x200000000000 00:04:14.670 EAL: Detected CPU lcores: 10 00:04:14.670 EAL: Detected NUMA nodes: 1 00:04:14.670 EAL: Detected shared linkage of DPDK 00:04:14.929 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:14.929 EAL: Selected IOVA mode 'PA' 00:04:14.929 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:14.929 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:10.0 (socket -1) 00:04:14.929 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:11.0 (socket -1) 00:04:14.929 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:12.0 (socket -1) 00:04:14.929 EAL: Probe PCI driver: spdk_nvme (1b36:0010) device: 0000:00:13.0 (socket -1) 00:04:14.929 Starting DPDK initialization... 00:04:14.929 Starting SPDK post initialization... 00:04:14.929 SPDK NVMe probe 00:04:14.929 Attaching to 0000:00:10.0 00:04:14.929 Attaching to 0000:00:11.0 00:04:14.929 Attaching to 0000:00:12.0 00:04:14.929 Attaching to 0000:00:13.0 00:04:14.929 Attached to 0000:00:10.0 00:04:14.929 Attached to 0000:00:11.0 00:04:14.929 Attached to 0000:00:13.0 00:04:14.929 Attached to 0000:00:12.0 00:04:14.929 Cleaning up... 
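The "Attaching to ... / Attached to ..." sequence above comes from SPDK's NVMe probe flow: spdk_nvme_probe() enumerates controllers, asks the probe callback whether to claim each one, and reports each attach. A minimal sketch, assuming local PCIe enumeration (NULL transport ID) and illustrative callback names; environment setup is reduced to the essentials.

#include "spdk/stdinc.h"
#include "spdk/nvme.h"

/* Called once per discovered controller; returning true claims it. */
static bool
probe_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
         struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attaching to %s\n", trid->traddr);
    return true;
}

/* Called after a claimed controller has been initialized. */
static void
attach_cb(void *cb_ctx, const struct spdk_nvme_transport_id *trid,
          struct spdk_nvme_ctrlr *ctrlr,
          const struct spdk_nvme_ctrlr_opts *opts)
{
    printf("Attached to %s\n", trid->traddr);
}

int
main(void)
{
    struct spdk_env_opts opts;

    spdk_env_opts_init(&opts);
    if (spdk_env_init(&opts) < 0) {
        return 1;
    }
    /* NULL trid: enumerate the local PCIe bus, as the test above does. */
    return spdk_nvme_probe(NULL, NULL, probe_cb, attach_cb, NULL) ? 1 : 0;
}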
00:04:14.929 00:04:14.929 real 0m0.238s 00:04:14.929 user 0m0.073s 00:04:14.929 sys 0m0.066s 00:04:14.929 01:59:39 env.env_dpdk_post_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:14.929 01:59:39 env.env_dpdk_post_init -- common/autotest_common.sh@10 -- # set +x 00:04:14.929 ************************************ 00:04:14.929 END TEST env_dpdk_post_init 00:04:14.929 ************************************ 00:04:14.929 01:59:39 env -- env/env.sh@26 -- # uname 00:04:14.929 01:59:39 env -- env/env.sh@26 -- # '[' Linux = Linux ']' 00:04:14.929 01:59:39 env -- env/env.sh@29 -- # run_test env_mem_callbacks /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:14.929 01:59:39 env -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:14.929 01:59:39 env -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:14.929 01:59:39 env -- common/autotest_common.sh@10 -- # set +x 00:04:14.929 ************************************ 00:04:14.929 START TEST env_mem_callbacks 00:04:14.929 ************************************ 00:04:14.929 01:59:39 env.env_mem_callbacks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/env/mem_callbacks/mem_callbacks 00:04:15.190 EAL: Detected CPU lcores: 10 00:04:15.190 EAL: Detected NUMA nodes: 1 00:04:15.190 EAL: Detected shared linkage of DPDK 00:04:15.190 EAL: Multi-process socket /var/run/dpdk/rte/mp_socket 00:04:15.190 EAL: Selected IOVA mode 'PA' 00:04:15.190 00:04:15.190 00:04:15.190 CUnit - A unit testing framework for C - Version 2.1-3 00:04:15.190 http://cunit.sourceforge.net/ 00:04:15.190 00:04:15.190 00:04:15.190 Suite: memory 00:04:15.190 Test: test ... 00:04:15.190 register 0x200000200000 2097152 00:04:15.190 malloc 3145728 00:04:15.190 TELEMETRY: No legacy callbacks, legacy socket not created 00:04:15.190 register 0x200000400000 4194304 00:04:15.190 buf 0x2000004fffc0 len 3145728 PASSED 00:04:15.190 malloc 64 00:04:15.190 buf 0x2000004ffec0 len 64 PASSED 00:04:15.190 malloc 4194304 00:04:15.190 register 0x200000800000 6291456 00:04:15.190 buf 0x2000009fffc0 len 4194304 PASSED 00:04:15.190 free 0x2000004fffc0 3145728 00:04:15.190 free 0x2000004ffec0 64 00:04:15.190 unregister 0x200000400000 4194304 PASSED 00:04:15.190 free 0x2000009fffc0 4194304 00:04:15.190 unregister 0x200000800000 6291456 PASSED 00:04:15.190 malloc 8388608 00:04:15.190 register 0x200000400000 10485760 00:04:15.190 buf 0x2000005fffc0 len 8388608 PASSED 00:04:15.190 free 0x2000005fffc0 8388608 00:04:15.190 unregister 0x200000400000 10485760 PASSED 00:04:15.190 passed 00:04:15.190 00:04:15.190 Run Summary: Type Total Ran Passed Failed Inactive 00:04:15.190 suites 1 1 n/a 0 0 00:04:15.190 tests 1 1 1 0 0 00:04:15.190 asserts 15 15 15 0 n/a 00:04:15.190 00:04:15.190 Elapsed time = 0.038 seconds 00:04:15.190 00:04:15.190 real 0m0.193s 00:04:15.190 user 0m0.049s 00:04:15.190 sys 0m0.043s 00:04:15.190 01:59:39 env.env_mem_callbacks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.190 01:59:39 env.env_mem_callbacks -- common/autotest_common.sh@10 -- # set +x 00:04:15.190 ************************************ 00:04:15.190 END TEST env_mem_callbacks 00:04:15.190 ************************************ 00:04:15.190 00:04:15.190 real 0m6.599s 00:04:15.190 user 0m4.911s 00:04:15.190 sys 0m1.234s 00:04:15.190 01:59:39 env -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:15.190 ************************************ 00:04:15.190 END TEST env 00:04:15.190 ************************************ 00:04:15.190 01:59:39 env -- 
common/autotest_common.sh@10 -- # set +x 00:04:15.452 01:59:39 -- spdk/autotest.sh@156 -- # run_test rpc /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.452 01:59:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:15.452 01:59:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:15.452 01:59:39 -- common/autotest_common.sh@10 -- # set +x 00:04:15.452 ************************************ 00:04:15.452 START TEST rpc 00:04:15.452 ************************************ 00:04:15.452 01:59:39 rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/rpc.sh 00:04:15.452 * Looking for test storage... 00:04:15.452 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:15.452 01:59:40 rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:15.452 01:59:40 rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:15.452 01:59:40 rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:15.452 01:59:40 rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:15.452 01:59:40 rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:15.452 01:59:40 rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:15.452 01:59:40 rpc -- scripts/common.sh@345 -- # : 1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:15.452 01:59:40 rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:15.452 01:59:40 rpc -- scripts/common.sh@365 -- # decimal 1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@353 -- # local d=1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:15.452 01:59:40 rpc -- scripts/common.sh@355 -- # echo 1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:15.452 01:59:40 rpc -- scripts/common.sh@366 -- # decimal 2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@353 -- # local d=2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:15.452 01:59:40 rpc -- scripts/common.sh@355 -- # echo 2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:15.452 01:59:40 rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:15.452 01:59:40 rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:15.452 01:59:40 rpc -- scripts/common.sh@368 -- # return 0 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.452 --rc genhtml_branch_coverage=1 00:04:15.452 --rc genhtml_function_coverage=1 00:04:15.452 --rc genhtml_legend=1 00:04:15.452 --rc geninfo_all_blocks=1 00:04:15.452 --rc geninfo_unexecuted_blocks=1 00:04:15.452 00:04:15.452 ' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.452 --rc genhtml_branch_coverage=1 00:04:15.452 --rc genhtml_function_coverage=1 00:04:15.452 --rc genhtml_legend=1 00:04:15.452 --rc geninfo_all_blocks=1 00:04:15.452 --rc geninfo_unexecuted_blocks=1 00:04:15.452 00:04:15.452 ' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.452 --rc genhtml_branch_coverage=1 00:04:15.452 --rc genhtml_function_coverage=1 00:04:15.452 --rc genhtml_legend=1 00:04:15.452 --rc geninfo_all_blocks=1 00:04:15.452 --rc geninfo_unexecuted_blocks=1 00:04:15.452 00:04:15.452 ' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:15.452 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:15.452 --rc genhtml_branch_coverage=1 00:04:15.452 --rc genhtml_function_coverage=1 00:04:15.452 --rc genhtml_legend=1 00:04:15.452 --rc geninfo_all_blocks=1 00:04:15.452 --rc geninfo_unexecuted_blocks=1 00:04:15.452 00:04:15.452 ' 00:04:15.452 01:59:40 rpc -- rpc/rpc.sh@65 -- # spdk_pid=58954 00:04:15.452 01:59:40 rpc -- rpc/rpc.sh@66 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:15.452 01:59:40 rpc -- rpc/rpc.sh@67 -- # waitforlisten 58954 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@835 -- # '[' -z 58954 ']' 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:15.452 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:15.452 01:59:40 rpc -- rpc/rpc.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -e bdev 00:04:15.452 01:59:40 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:15.452 [2024-12-15 01:59:40.209885] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:15.452 [2024-12-15 01:59:40.210041] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid58954 ] 00:04:15.711 [2024-12-15 01:59:40.370047] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:15.711 [2024-12-15 01:59:40.455088] app.c: 612:app_setup_trace: *NOTICE*: Tracepoint Group Mask bdev specified. 00:04:15.711 [2024-12-15 01:59:40.455130] app.c: 613:app_setup_trace: *NOTICE*: Use 'spdk_trace -s spdk_tgt -p 58954' to capture a snapshot of events at runtime. 00:04:15.711 [2024-12-15 01:59:40.455138] app.c: 618:app_setup_trace: *NOTICE*: 'spdk_trace' without parameters will also work if this is the only 00:04:15.711 [2024-12-15 01:59:40.455145] app.c: 619:app_setup_trace: *NOTICE*: SPDK application currently running. 00:04:15.711 [2024-12-15 01:59:40.455151] app.c: 620:app_setup_trace: *NOTICE*: Or copy /dev/shm/spdk_tgt_trace.pid58954 for offline analysis/debug. 00:04:15.711 [2024-12-15 01:59:40.455824] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:16.306 01:59:41 rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:16.306 01:59:41 rpc -- common/autotest_common.sh@868 -- # return 0 00:04:16.306 01:59:41 rpc -- rpc/rpc.sh@69 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.306 01:59:41 rpc -- rpc/rpc.sh@69 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/test/rpc 00:04:16.307 01:59:41 rpc -- rpc/rpc.sh@72 -- # rpc=rpc_cmd 00:04:16.307 01:59:41 rpc -- rpc/rpc.sh@73 -- # run_test rpc_integrity rpc_integrity 00:04:16.307 01:59:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.307 01:59:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.307 01:59:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.307 ************************************ 00:04:16.307 START TEST rpc_integrity 00:04:16.307 ************************************ 00:04:16.307 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:16.307 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:16.307 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.307 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.307 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.307 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:16.307 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 
00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc0 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:16.565 { 00:04:16.565 "name": "Malloc0", 00:04:16.565 "aliases": [ 00:04:16.565 "4e3d9fe4-960a-4c41-9dfe-73e193158b27" 00:04:16.565 ], 00:04:16.565 "product_name": "Malloc disk", 00:04:16.565 "block_size": 512, 00:04:16.565 "num_blocks": 16384, 00:04:16.565 "uuid": "4e3d9fe4-960a-4c41-9dfe-73e193158b27", 00:04:16.565 "assigned_rate_limits": { 00:04:16.565 "rw_ios_per_sec": 0, 00:04:16.565 "rw_mbytes_per_sec": 0, 00:04:16.565 "r_mbytes_per_sec": 0, 00:04:16.565 "w_mbytes_per_sec": 0 00:04:16.565 }, 00:04:16.565 "claimed": false, 00:04:16.565 "zoned": false, 00:04:16.565 "supported_io_types": { 00:04:16.565 "read": true, 00:04:16.565 "write": true, 00:04:16.565 "unmap": true, 00:04:16.565 "flush": true, 00:04:16.565 "reset": true, 00:04:16.565 "nvme_admin": false, 00:04:16.565 "nvme_io": false, 00:04:16.565 "nvme_io_md": false, 00:04:16.565 "write_zeroes": true, 00:04:16.565 "zcopy": true, 00:04:16.565 "get_zone_info": false, 00:04:16.565 "zone_management": false, 00:04:16.565 "zone_append": false, 00:04:16.565 "compare": false, 00:04:16.565 "compare_and_write": false, 00:04:16.565 "abort": true, 00:04:16.565 "seek_hole": false, 00:04:16.565 "seek_data": false, 00:04:16.565 "copy": true, 00:04:16.565 "nvme_iov_md": false 00:04:16.565 }, 00:04:16.565 "memory_domains": [ 00:04:16.565 { 00:04:16.565 "dma_device_id": "system", 00:04:16.565 "dma_device_type": 1 00:04:16.565 }, 00:04:16.565 { 00:04:16.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.565 "dma_device_type": 2 00:04:16.565 } 00:04:16.565 ], 00:04:16.565 "driver_specific": {} 00:04:16.565 } 00:04:16.565 ]' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc0 -p Passthru0 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 [2024-12-15 01:59:41.152581] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc0 00:04:16.565 [2024-12-15 01:59:41.152627] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:16.565 [2024-12-15 01:59:41.152646] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000008480 00:04:16.565 [2024-12-15 01:59:41.152655] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:16.565 [2024-12-15 01:59:41.154343] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:16.565 [2024-12-15 01:59:41.154374] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:16.565 
Passthru0 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:16.565 { 00:04:16.565 "name": "Malloc0", 00:04:16.565 "aliases": [ 00:04:16.565 "4e3d9fe4-960a-4c41-9dfe-73e193158b27" 00:04:16.565 ], 00:04:16.565 "product_name": "Malloc disk", 00:04:16.565 "block_size": 512, 00:04:16.565 "num_blocks": 16384, 00:04:16.565 "uuid": "4e3d9fe4-960a-4c41-9dfe-73e193158b27", 00:04:16.565 "assigned_rate_limits": { 00:04:16.565 "rw_ios_per_sec": 0, 00:04:16.565 "rw_mbytes_per_sec": 0, 00:04:16.565 "r_mbytes_per_sec": 0, 00:04:16.565 "w_mbytes_per_sec": 0 00:04:16.565 }, 00:04:16.565 "claimed": true, 00:04:16.565 "claim_type": "exclusive_write", 00:04:16.565 "zoned": false, 00:04:16.565 "supported_io_types": { 00:04:16.565 "read": true, 00:04:16.565 "write": true, 00:04:16.565 "unmap": true, 00:04:16.565 "flush": true, 00:04:16.565 "reset": true, 00:04:16.565 "nvme_admin": false, 00:04:16.565 "nvme_io": false, 00:04:16.565 "nvme_io_md": false, 00:04:16.565 "write_zeroes": true, 00:04:16.565 "zcopy": true, 00:04:16.565 "get_zone_info": false, 00:04:16.565 "zone_management": false, 00:04:16.565 "zone_append": false, 00:04:16.565 "compare": false, 00:04:16.565 "compare_and_write": false, 00:04:16.565 "abort": true, 00:04:16.565 "seek_hole": false, 00:04:16.565 "seek_data": false, 00:04:16.565 "copy": true, 00:04:16.565 "nvme_iov_md": false 00:04:16.565 }, 00:04:16.565 "memory_domains": [ 00:04:16.565 { 00:04:16.565 "dma_device_id": "system", 00:04:16.565 "dma_device_type": 1 00:04:16.565 }, 00:04:16.565 { 00:04:16.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.565 "dma_device_type": 2 00:04:16.565 } 00:04:16.565 ], 00:04:16.565 "driver_specific": {} 00:04:16.565 }, 00:04:16.565 { 00:04:16.565 "name": "Passthru0", 00:04:16.565 "aliases": [ 00:04:16.565 "2fe735e5-c1c2-5b47-9173-03ef57f3cdc4" 00:04:16.565 ], 00:04:16.565 "product_name": "passthru", 00:04:16.565 "block_size": 512, 00:04:16.565 "num_blocks": 16384, 00:04:16.565 "uuid": "2fe735e5-c1c2-5b47-9173-03ef57f3cdc4", 00:04:16.565 "assigned_rate_limits": { 00:04:16.565 "rw_ios_per_sec": 0, 00:04:16.565 "rw_mbytes_per_sec": 0, 00:04:16.565 "r_mbytes_per_sec": 0, 00:04:16.565 "w_mbytes_per_sec": 0 00:04:16.565 }, 00:04:16.565 "claimed": false, 00:04:16.565 "zoned": false, 00:04:16.565 "supported_io_types": { 00:04:16.565 "read": true, 00:04:16.565 "write": true, 00:04:16.565 "unmap": true, 00:04:16.565 "flush": true, 00:04:16.565 "reset": true, 00:04:16.565 "nvme_admin": false, 00:04:16.565 "nvme_io": false, 00:04:16.565 "nvme_io_md": false, 00:04:16.565 "write_zeroes": true, 00:04:16.565 "zcopy": true, 00:04:16.565 "get_zone_info": false, 00:04:16.565 "zone_management": false, 00:04:16.565 "zone_append": false, 00:04:16.565 "compare": false, 00:04:16.565 "compare_and_write": false, 00:04:16.565 "abort": true, 00:04:16.565 "seek_hole": false, 00:04:16.565 "seek_data": false, 00:04:16.565 "copy": true, 00:04:16.565 "nvme_iov_md": false 00:04:16.565 }, 00:04:16.565 "memory_domains": [ 00:04:16.565 { 00:04:16.565 "dma_device_id": "system", 00:04:16.565 "dma_device_type": 1 00:04:16.565 }, 
00:04:16.565 { 00:04:16.565 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.565 "dma_device_type": 2 00:04:16.565 } 00:04:16.565 ], 00:04:16.565 "driver_specific": { 00:04:16.565 "passthru": { 00:04:16.565 "name": "Passthru0", 00:04:16.565 "base_bdev_name": "Malloc0" 00:04:16.565 } 00:04:16.565 } 00:04:16.565 } 00:04:16.565 ]' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc0 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:16.565 ************************************ 00:04:16.565 END TEST rpc_integrity 00:04:16.565 ************************************ 00:04:16.565 01:59:41 rpc.rpc_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:16.565 00:04:16.565 real 0m0.227s 00:04:16.565 user 0m0.124s 00:04:16.565 sys 0m0.032s 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc -- rpc/rpc.sh@74 -- # run_test rpc_plugins rpc_plugins 00:04:16.565 01:59:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.565 01:59:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.565 01:59:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 ************************************ 00:04:16.565 START TEST rpc_plugins 00:04:16.565 ************************************ 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@1129 -- # rpc_plugins 00:04:16.565 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # rpc_cmd --plugin rpc_plugin create_malloc 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.565 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@30 -- # malloc=Malloc1 00:04:16.565 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@31 -- # rpc_cmd bdev_get_bdevs 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.565 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.824 01:59:41 
rpc.rpc_plugins -- rpc/rpc.sh@31 -- # bdevs='[ 00:04:16.824 { 00:04:16.824 "name": "Malloc1", 00:04:16.824 "aliases": [ 00:04:16.824 "9da59cb2-24fe-4927-95f1-e3be0b1fa723" 00:04:16.824 ], 00:04:16.824 "product_name": "Malloc disk", 00:04:16.824 "block_size": 4096, 00:04:16.824 "num_blocks": 256, 00:04:16.824 "uuid": "9da59cb2-24fe-4927-95f1-e3be0b1fa723", 00:04:16.824 "assigned_rate_limits": { 00:04:16.824 "rw_ios_per_sec": 0, 00:04:16.824 "rw_mbytes_per_sec": 0, 00:04:16.824 "r_mbytes_per_sec": 0, 00:04:16.824 "w_mbytes_per_sec": 0 00:04:16.824 }, 00:04:16.824 "claimed": false, 00:04:16.824 "zoned": false, 00:04:16.824 "supported_io_types": { 00:04:16.824 "read": true, 00:04:16.824 "write": true, 00:04:16.824 "unmap": true, 00:04:16.824 "flush": true, 00:04:16.824 "reset": true, 00:04:16.824 "nvme_admin": false, 00:04:16.824 "nvme_io": false, 00:04:16.824 "nvme_io_md": false, 00:04:16.824 "write_zeroes": true, 00:04:16.824 "zcopy": true, 00:04:16.824 "get_zone_info": false, 00:04:16.824 "zone_management": false, 00:04:16.824 "zone_append": false, 00:04:16.824 "compare": false, 00:04:16.824 "compare_and_write": false, 00:04:16.824 "abort": true, 00:04:16.824 "seek_hole": false, 00:04:16.824 "seek_data": false, 00:04:16.824 "copy": true, 00:04:16.824 "nvme_iov_md": false 00:04:16.824 }, 00:04:16.824 "memory_domains": [ 00:04:16.824 { 00:04:16.824 "dma_device_id": "system", 00:04:16.824 "dma_device_type": 1 00:04:16.824 }, 00:04:16.824 { 00:04:16.824 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:16.824 "dma_device_type": 2 00:04:16.824 } 00:04:16.824 ], 00:04:16.824 "driver_specific": {} 00:04:16.824 } 00:04:16.824 ]' 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # jq length 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@32 -- # '[' 1 == 1 ']' 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@34 -- # rpc_cmd --plugin rpc_plugin delete_malloc Malloc1 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # rpc_cmd bdev_get_bdevs 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@35 -- # bdevs='[]' 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # jq length 00:04:16.824 ************************************ 00:04:16.824 END TEST rpc_plugins 00:04:16.824 ************************************ 00:04:16.824 01:59:41 rpc.rpc_plugins -- rpc/rpc.sh@36 -- # '[' 0 == 0 ']' 00:04:16.824 00:04:16.824 real 0m0.112s 00:04:16.824 user 0m0.063s 00:04:16.824 sys 0m0.018s 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:16.824 01:59:41 rpc.rpc_plugins -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 01:59:41 rpc -- rpc/rpc.sh@75 -- # run_test rpc_trace_cmd_test rpc_trace_cmd_test 00:04:16.824 01:59:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:16.824 01:59:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:16.824 01:59:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 ************************************ 00:04:16.824 START TEST rpc_trace_cmd_test 
00:04:16.824 ************************************ 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1129 -- # rpc_trace_cmd_test 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@40 -- # local info 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # rpc_cmd trace_get_info 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@42 -- # info='{ 00:04:16.824 "tpoint_shm_path": "/dev/shm/spdk_tgt_trace.pid58954", 00:04:16.824 "tpoint_group_mask": "0x8", 00:04:16.824 "iscsi_conn": { 00:04:16.824 "mask": "0x2", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "scsi": { 00:04:16.824 "mask": "0x4", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "bdev": { 00:04:16.824 "mask": "0x8", 00:04:16.824 "tpoint_mask": "0xffffffffffffffff" 00:04:16.824 }, 00:04:16.824 "nvmf_rdma": { 00:04:16.824 "mask": "0x10", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "nvmf_tcp": { 00:04:16.824 "mask": "0x20", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "ftl": { 00:04:16.824 "mask": "0x40", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "blobfs": { 00:04:16.824 "mask": "0x80", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "dsa": { 00:04:16.824 "mask": "0x200", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "thread": { 00:04:16.824 "mask": "0x400", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "nvme_pcie": { 00:04:16.824 "mask": "0x800", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "iaa": { 00:04:16.824 "mask": "0x1000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "nvme_tcp": { 00:04:16.824 "mask": "0x2000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "bdev_nvme": { 00:04:16.824 "mask": "0x4000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "sock": { 00:04:16.824 "mask": "0x8000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "blob": { 00:04:16.824 "mask": "0x10000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "bdev_raid": { 00:04:16.824 "mask": "0x20000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 }, 00:04:16.824 "scheduler": { 00:04:16.824 "mask": "0x40000", 00:04:16.824 "tpoint_mask": "0x0" 00:04:16.824 } 00:04:16.824 }' 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # jq length 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@43 -- # '[' 19 -gt 2 ']' 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # jq 'has("tpoint_group_mask")' 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@44 -- # '[' true = true ']' 00:04:16.824 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # jq 'has("tpoint_shm_path")' 00:04:16.825 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@45 -- # '[' true = true ']' 00:04:16.825 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # jq 'has("bdev")' 00:04:17.083 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@46 -- # '[' true = true ']' 00:04:17.083 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # jq -r .bdev.tpoint_mask 00:04:17.083 01:59:41 rpc.rpc_trace_cmd_test -- rpc/rpc.sh@47 -- # '[' 0xffffffffffffffff '!=' 0x0 ']' 00:04:17.083 00:04:17.083 real 0m0.169s 00:04:17.083 
user 0m0.148s 00:04:17.083 sys 0m0.012s 00:04:17.083 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.083 01:59:41 rpc.rpc_trace_cmd_test -- common/autotest_common.sh@10 -- # set +x 00:04:17.083 ************************************ 00:04:17.083 END TEST rpc_trace_cmd_test 00:04:17.083 ************************************ 00:04:17.083 01:59:41 rpc -- rpc/rpc.sh@76 -- # [[ 0 -eq 1 ]] 00:04:17.083 01:59:41 rpc -- rpc/rpc.sh@80 -- # rpc=rpc_cmd 00:04:17.084 01:59:41 rpc -- rpc/rpc.sh@81 -- # run_test rpc_daemon_integrity rpc_integrity 00:04:17.084 01:59:41 rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:17.084 01:59:41 rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:17.084 01:59:41 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 ************************************ 00:04:17.084 START TEST rpc_daemon_integrity 00:04:17.084 ************************************ 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1129 -- # rpc_integrity 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # rpc_cmd bdev_get_bdevs 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@12 -- # bdevs='[]' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # jq length 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@13 -- # '[' 0 == 0 ']' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # rpc_cmd bdev_malloc_create 8 512 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@15 -- # malloc=Malloc2 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # rpc_cmd bdev_get_bdevs 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@16 -- # bdevs='[ 00:04:17.084 { 00:04:17.084 "name": "Malloc2", 00:04:17.084 "aliases": [ 00:04:17.084 "743f8b76-d3ef-4367-9f72-1454cdc311ad" 00:04:17.084 ], 00:04:17.084 "product_name": "Malloc disk", 00:04:17.084 "block_size": 512, 00:04:17.084 "num_blocks": 16384, 00:04:17.084 "uuid": "743f8b76-d3ef-4367-9f72-1454cdc311ad", 00:04:17.084 "assigned_rate_limits": { 00:04:17.084 "rw_ios_per_sec": 0, 00:04:17.084 "rw_mbytes_per_sec": 0, 00:04:17.084 "r_mbytes_per_sec": 0, 00:04:17.084 "w_mbytes_per_sec": 0 00:04:17.084 }, 00:04:17.084 "claimed": false, 00:04:17.084 "zoned": false, 00:04:17.084 "supported_io_types": { 00:04:17.084 "read": true, 00:04:17.084 "write": true, 00:04:17.084 "unmap": true, 00:04:17.084 "flush": true, 00:04:17.084 "reset": true, 00:04:17.084 "nvme_admin": false, 00:04:17.084 "nvme_io": false, 00:04:17.084 "nvme_io_md": false, 00:04:17.084 "write_zeroes": true, 00:04:17.084 "zcopy": true, 00:04:17.084 "get_zone_info": 
false, 00:04:17.084 "zone_management": false, 00:04:17.084 "zone_append": false, 00:04:17.084 "compare": false, 00:04:17.084 "compare_and_write": false, 00:04:17.084 "abort": true, 00:04:17.084 "seek_hole": false, 00:04:17.084 "seek_data": false, 00:04:17.084 "copy": true, 00:04:17.084 "nvme_iov_md": false 00:04:17.084 }, 00:04:17.084 "memory_domains": [ 00:04:17.084 { 00:04:17.084 "dma_device_id": "system", 00:04:17.084 "dma_device_type": 1 00:04:17.084 }, 00:04:17.084 { 00:04:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.084 "dma_device_type": 2 00:04:17.084 } 00:04:17.084 ], 00:04:17.084 "driver_specific": {} 00:04:17.084 } 00:04:17.084 ]' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # jq length 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@17 -- # '[' 1 == 1 ']' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@19 -- # rpc_cmd bdev_passthru_create -b Malloc2 -p Passthru0 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 [2024-12-15 01:59:41.768227] vbdev_passthru.c: 608:vbdev_passthru_register: *NOTICE*: Match on Malloc2 00:04:17.084 [2024-12-15 01:59:41.768271] vbdev_passthru.c: 636:vbdev_passthru_register: *NOTICE*: base bdev opened 00:04:17.084 [2024-12-15 01:59:41.768284] vbdev_passthru.c: 682:vbdev_passthru_register: *NOTICE*: io_device created at: 0x0x616000009680 00:04:17.084 [2024-12-15 01:59:41.768293] vbdev_passthru.c: 697:vbdev_passthru_register: *NOTICE*: bdev claimed 00:04:17.084 [2024-12-15 01:59:41.769916] vbdev_passthru.c: 710:vbdev_passthru_register: *NOTICE*: pt_bdev registered 00:04:17.084 [2024-12-15 01:59:41.769949] vbdev_passthru.c: 711:vbdev_passthru_register: *NOTICE*: created pt_bdev for: Passthru0 00:04:17.084 Passthru0 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # rpc_cmd bdev_get_bdevs 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@20 -- # bdevs='[ 00:04:17.084 { 00:04:17.084 "name": "Malloc2", 00:04:17.084 "aliases": [ 00:04:17.084 "743f8b76-d3ef-4367-9f72-1454cdc311ad" 00:04:17.084 ], 00:04:17.084 "product_name": "Malloc disk", 00:04:17.084 "block_size": 512, 00:04:17.084 "num_blocks": 16384, 00:04:17.084 "uuid": "743f8b76-d3ef-4367-9f72-1454cdc311ad", 00:04:17.084 "assigned_rate_limits": { 00:04:17.084 "rw_ios_per_sec": 0, 00:04:17.084 "rw_mbytes_per_sec": 0, 00:04:17.084 "r_mbytes_per_sec": 0, 00:04:17.084 "w_mbytes_per_sec": 0 00:04:17.084 }, 00:04:17.084 "claimed": true, 00:04:17.084 "claim_type": "exclusive_write", 00:04:17.084 "zoned": false, 00:04:17.084 "supported_io_types": { 00:04:17.084 "read": true, 00:04:17.084 "write": true, 00:04:17.084 "unmap": true, 00:04:17.084 "flush": true, 00:04:17.084 "reset": true, 00:04:17.084 "nvme_admin": false, 00:04:17.084 "nvme_io": false, 00:04:17.084 "nvme_io_md": false, 00:04:17.084 "write_zeroes": true, 00:04:17.084 "zcopy": true, 00:04:17.084 "get_zone_info": false, 00:04:17.084 "zone_management": false, 00:04:17.084 "zone_append": false, 00:04:17.084 "compare": false, 
00:04:17.084 "compare_and_write": false, 00:04:17.084 "abort": true, 00:04:17.084 "seek_hole": false, 00:04:17.084 "seek_data": false, 00:04:17.084 "copy": true, 00:04:17.084 "nvme_iov_md": false 00:04:17.084 }, 00:04:17.084 "memory_domains": [ 00:04:17.084 { 00:04:17.084 "dma_device_id": "system", 00:04:17.084 "dma_device_type": 1 00:04:17.084 }, 00:04:17.084 { 00:04:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.084 "dma_device_type": 2 00:04:17.084 } 00:04:17.084 ], 00:04:17.084 "driver_specific": {} 00:04:17.084 }, 00:04:17.084 { 00:04:17.084 "name": "Passthru0", 00:04:17.084 "aliases": [ 00:04:17.084 "1c39e8c9-b175-5eca-88d2-3bcf6829a9d9" 00:04:17.084 ], 00:04:17.084 "product_name": "passthru", 00:04:17.084 "block_size": 512, 00:04:17.084 "num_blocks": 16384, 00:04:17.084 "uuid": "1c39e8c9-b175-5eca-88d2-3bcf6829a9d9", 00:04:17.084 "assigned_rate_limits": { 00:04:17.084 "rw_ios_per_sec": 0, 00:04:17.084 "rw_mbytes_per_sec": 0, 00:04:17.084 "r_mbytes_per_sec": 0, 00:04:17.084 "w_mbytes_per_sec": 0 00:04:17.084 }, 00:04:17.084 "claimed": false, 00:04:17.084 "zoned": false, 00:04:17.084 "supported_io_types": { 00:04:17.084 "read": true, 00:04:17.084 "write": true, 00:04:17.084 "unmap": true, 00:04:17.084 "flush": true, 00:04:17.084 "reset": true, 00:04:17.084 "nvme_admin": false, 00:04:17.084 "nvme_io": false, 00:04:17.084 "nvme_io_md": false, 00:04:17.084 "write_zeroes": true, 00:04:17.084 "zcopy": true, 00:04:17.084 "get_zone_info": false, 00:04:17.084 "zone_management": false, 00:04:17.084 "zone_append": false, 00:04:17.084 "compare": false, 00:04:17.084 "compare_and_write": false, 00:04:17.084 "abort": true, 00:04:17.084 "seek_hole": false, 00:04:17.084 "seek_data": false, 00:04:17.084 "copy": true, 00:04:17.084 "nvme_iov_md": false 00:04:17.084 }, 00:04:17.084 "memory_domains": [ 00:04:17.084 { 00:04:17.084 "dma_device_id": "system", 00:04:17.084 "dma_device_type": 1 00:04:17.084 }, 00:04:17.084 { 00:04:17.084 "dma_device_id": "SPDK_ACCEL_DMA_DEVICE", 00:04:17.084 "dma_device_type": 2 00:04:17.084 } 00:04:17.084 ], 00:04:17.084 "driver_specific": { 00:04:17.084 "passthru": { 00:04:17.084 "name": "Passthru0", 00:04:17.084 "base_bdev_name": "Malloc2" 00:04:17.084 } 00:04:17.084 } 00:04:17.084 } 00:04:17.084 ]' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # jq length 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@21 -- # '[' 2 == 2 ']' 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@23 -- # rpc_cmd bdev_passthru_delete Passthru0 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@24 -- # rpc_cmd bdev_malloc_delete Malloc2 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # rpc_cmd bdev_get_bdevs 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:17.084 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@25 -- # bdevs='[]' 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # jq length 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- rpc/rpc.sh@26 -- # '[' 0 == 0 ']' 00:04:17.343 00:04:17.343 real 0m0.220s 00:04:17.343 user 0m0.119s 00:04:17.343 sys 0m0.033s 00:04:17.343 ************************************ 00:04:17.343 END TEST rpc_daemon_integrity 00:04:17.343 ************************************ 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:17.343 01:59:41 rpc.rpc_daemon_integrity -- common/autotest_common.sh@10 -- # set +x 00:04:17.343 01:59:41 rpc -- rpc/rpc.sh@83 -- # trap - SIGINT SIGTERM EXIT 00:04:17.343 01:59:41 rpc -- rpc/rpc.sh@84 -- # killprocess 58954 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@954 -- # '[' -z 58954 ']' 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@958 -- # kill -0 58954 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@959 -- # uname 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 58954 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:17.343 killing process with pid 58954 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 58954' 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@973 -- # kill 58954 00:04:17.343 01:59:41 rpc -- common/autotest_common.sh@978 -- # wait 58954 00:04:18.720 00:04:18.720 real 0m3.114s 00:04:18.720 user 0m3.541s 00:04:18.720 sys 0m0.582s 00:04:18.720 ************************************ 00:04:18.720 END TEST rpc 00:04:18.720 ************************************ 00:04:18.720 01:59:43 rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:18.720 01:59:43 rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.720 01:59:43 -- spdk/autotest.sh@157 -- # run_test skip_rpc /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.720 01:59:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.720 01:59:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.720 01:59:43 -- common/autotest_common.sh@10 -- # set +x 00:04:18.720 ************************************ 00:04:18.720 START TEST skip_rpc 00:04:18.720 ************************************ 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc/skip_rpc.sh 00:04:18.720 * Looking for test storage... 
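The rpc_daemon_integrity run that ends above exercises the bdev JSON-RPC surface end to end: it layers a passthru bdev on a malloc bdev, confirms bdev_get_bdevs reports both, then tears them down and confirms the list is empty. A minimal standalone sketch of the same flow, assuming scripts/rpc.py from an SPDK checkout and a target already listening on the default /var/tmp/spdk.sock:

  #!/usr/bin/env bash
  set -euo pipefail
  rpc=scripts/rpc.py                                 # path inside an SPDK checkout (assumption)
  $rpc bdev_malloc_create -b Malloc2 8 512           # 8 MiB malloc bdev, 512 B blocks
  $rpc bdev_passthru_create -b Malloc2 -p Passthru0  # claims Malloc2, as the NOTICE lines above show
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 2 ]     # both bdevs visible
  $rpc bdev_passthru_delete Passthru0
  $rpc bdev_malloc_delete Malloc2
  [ "$($rpc bdev_get_bdevs | jq length)" -eq 0 ]     # list empty again

The "claimed": true / "claim_type": "exclusive_write" fields in the dump above are the observable effect of the passthru bdev claiming its base.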
00:04:18.720 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@345 -- # : 1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@353 -- # local d=1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@355 -- # echo 1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@353 -- # local d=2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@355 -- # echo 2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:18.720 01:59:43 skip_rpc -- scripts/common.sh@368 -- # return 0 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.720 --rc genhtml_branch_coverage=1 00:04:18.720 --rc genhtml_function_coverage=1 00:04:18.720 --rc genhtml_legend=1 00:04:18.720 --rc geninfo_all_blocks=1 00:04:18.720 --rc geninfo_unexecuted_blocks=1 00:04:18.720 00:04:18.720 ' 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.720 --rc genhtml_branch_coverage=1 00:04:18.720 --rc genhtml_function_coverage=1 00:04:18.720 --rc genhtml_legend=1 00:04:18.720 --rc geninfo_all_blocks=1 00:04:18.720 --rc geninfo_unexecuted_blocks=1 00:04:18.720 00:04:18.720 ' 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:04:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.720 --rc genhtml_branch_coverage=1 00:04:18.720 --rc genhtml_function_coverage=1 00:04:18.720 --rc genhtml_legend=1 00:04:18.720 --rc geninfo_all_blocks=1 00:04:18.720 --rc geninfo_unexecuted_blocks=1 00:04:18.720 00:04:18.720 ' 00:04:18.720 01:59:43 skip_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:18.720 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:18.720 --rc genhtml_branch_coverage=1 00:04:18.721 --rc genhtml_function_coverage=1 00:04:18.721 --rc genhtml_legend=1 00:04:18.721 --rc geninfo_all_blocks=1 00:04:18.721 --rc geninfo_unexecuted_blocks=1 00:04:18.721 00:04:18.721 ' 00:04:18.721 01:59:43 skip_rpc -- rpc/skip_rpc.sh@11 -- # CONFIG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:18.721 01:59:43 skip_rpc -- rpc/skip_rpc.sh@12 -- # LOG_PATH=/home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:18.721 01:59:43 skip_rpc -- rpc/skip_rpc.sh@73 -- # run_test skip_rpc test_skip_rpc 00:04:18.721 01:59:43 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:18.721 01:59:43 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:18.721 01:59:43 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:18.721 ************************************ 00:04:18.721 START TEST skip_rpc 00:04:18.721 ************************************ 00:04:18.721 01:59:43 skip_rpc.skip_rpc -- common/autotest_common.sh@1129 -- # test_skip_rpc 00:04:18.721 01:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@16 -- # local spdk_pid=59161 00:04:18.721 01:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@18 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:18.721 01:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@19 -- # sleep 5 00:04:18.721 01:59:43 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 00:04:18.721 [2024-12-15 01:59:43.387701] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
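The skip_rpc test that starts here launches spdk_tgt with --no-rpc-server, so the NOT rpc_cmd spdk_get_version check that follows must fail (es=1): with no RPC listener there is nothing for the client to connect to. A hedged sketch of the same assertion, reusing the binary path from this run:

  # Expect any RPC to fail when the target was started with --no-rpc-server.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 &
  pid=$!
  sleep 5                                # the test script sleeps rather than polling a socket
  if scripts/rpc.py spdk_get_version; then
      echo "unexpected: RPC server answered" >&2; kill "$pid"; exit 1
  fi
  kill "$pid"; wait "$pid" || true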
00:04:18.721 [2024-12-15 01:59:43.387843] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59161 ] 00:04:18.979 [2024-12-15 01:59:43.545470] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:18.979 [2024-12-15 01:59:43.629759] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@21 -- # NOT rpc_cmd spdk_get_version 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@652 -- # local es=0 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd spdk_get_version 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # rpc_cmd spdk_get_version 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@655 -- # es=1 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@22 -- # trap - SIGINT SIGTERM EXIT 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- rpc/skip_rpc.sh@23 -- # killprocess 59161 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@954 -- # '[' -z 59161 ']' 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@958 -- # kill -0 59161 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # uname 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59161 00:04:24.243 killing process with pid 59161 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59161' 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@973 -- # kill 59161 00:04:24.243 01:59:48 skip_rpc.skip_rpc -- common/autotest_common.sh@978 -- # wait 59161 00:04:24.810 00:04:24.810 real 0m6.190s 00:04:24.810 user 0m5.822s 00:04:24.810 sys 0m0.266s 00:04:24.810 01:59:49 skip_rpc.skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:24.810 ************************************ 00:04:24.810 END TEST skip_rpc 00:04:24.810 ************************************ 00:04:24.810 01:59:49 skip_rpc.skip_rpc -- common/autotest_common.sh@10 -- # 
set +x 00:04:24.810 01:59:49 skip_rpc -- rpc/skip_rpc.sh@74 -- # run_test skip_rpc_with_json test_skip_rpc_with_json 00:04:24.810 01:59:49 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:24.810 01:59:49 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:24.810 01:59:49 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:24.810 ************************************ 00:04:24.810 START TEST skip_rpc_with_json 00:04:24.810 ************************************ 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_json 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@44 -- # gen_json_config 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@28 -- # local spdk_pid=59254 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@30 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@31 -- # waitforlisten 59254 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@835 -- # '[' -z 59254 ']' 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:24.810 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@27 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:24.810 01:59:49 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.068 [2024-12-15 01:59:49.628322] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
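skip_rpc_with_json, starting here, is a save/replay round-trip: it creates a TCP transport over RPC, dumps the live configuration with save_config (the full subsystem dump is printed below), then relaunches the target with --json and greps the log for the "TCP Transport Init" notice. A condensed sketch of that round-trip, assuming the same binary and default socket, with /tmp/config.json as a stand-in path:

  rpc=scripts/rpc.py
  $rpc nvmf_create_transport -t tcp      # give the config something to record
  $rpc save_config > /tmp/config.json    # subsystem/method/params dump, as below
  $rpc spdk_kill_instance SIGTERM        # stop the first target
  # A target started with --json applies the saved config before serving I/O.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --json /tmp/config.json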
00:04:25.068 [2024-12-15 01:59:49.628435] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59254 ] 00:04:25.068 [2024-12-15 01:59:49.782855] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:25.327 [2024-12-15 01:59:49.856864] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@868 -- # return 0 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_get_transports --trtype tcp 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.895 [2024-12-15 01:59:50.421548] nvmf_rpc.c:2707:rpc_nvmf_get_transports: *ERROR*: transport 'tcp' does not exist 00:04:25.895 request: 00:04:25.895 { 00:04:25.895 "trtype": "tcp", 00:04:25.895 "method": "nvmf_get_transports", 00:04:25.895 "req_id": 1 00:04:25.895 } 00:04:25.895 Got JSON-RPC error response 00:04:25.895 response: 00:04:25.895 { 00:04:25.895 "code": -19, 00:04:25.895 "message": "No such device" 00:04:25.895 } 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@34 -- # rpc_cmd nvmf_create_transport -t tcp 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.895 [2024-12-15 01:59:50.429633] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@36 -- # rpc_cmd save_config 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:25.895 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@37 -- # cat /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:25.895 { 00:04:25.895 "subsystems": [ 00:04:25.895 { 00:04:25.895 "subsystem": "fsdev", 00:04:25.895 "config": [ 00:04:25.895 { 00:04:25.895 "method": "fsdev_set_opts", 00:04:25.895 "params": { 00:04:25.895 "fsdev_io_pool_size": 65535, 00:04:25.895 "fsdev_io_cache_size": 256 00:04:25.895 } 00:04:25.895 } 00:04:25.895 ] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "keyring", 00:04:25.895 "config": [] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "iobuf", 00:04:25.895 "config": [ 00:04:25.895 { 00:04:25.895 "method": "iobuf_set_options", 00:04:25.895 "params": { 00:04:25.895 "small_pool_count": 8192, 00:04:25.895 "large_pool_count": 1024, 00:04:25.895 "small_bufsize": 8192, 00:04:25.895 "large_bufsize": 135168, 00:04:25.895 "enable_numa": false 00:04:25.895 } 00:04:25.895 } 00:04:25.895 ] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "sock", 00:04:25.895 "config": [ 00:04:25.895 { 
00:04:25.895 "method": "sock_set_default_impl", 00:04:25.895 "params": { 00:04:25.895 "impl_name": "posix" 00:04:25.895 } 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "method": "sock_impl_set_options", 00:04:25.895 "params": { 00:04:25.895 "impl_name": "ssl", 00:04:25.895 "recv_buf_size": 4096, 00:04:25.895 "send_buf_size": 4096, 00:04:25.895 "enable_recv_pipe": true, 00:04:25.895 "enable_quickack": false, 00:04:25.895 "enable_placement_id": 0, 00:04:25.895 "enable_zerocopy_send_server": true, 00:04:25.895 "enable_zerocopy_send_client": false, 00:04:25.895 "zerocopy_threshold": 0, 00:04:25.895 "tls_version": 0, 00:04:25.895 "enable_ktls": false 00:04:25.895 } 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "method": "sock_impl_set_options", 00:04:25.895 "params": { 00:04:25.895 "impl_name": "posix", 00:04:25.895 "recv_buf_size": 2097152, 00:04:25.895 "send_buf_size": 2097152, 00:04:25.895 "enable_recv_pipe": true, 00:04:25.895 "enable_quickack": false, 00:04:25.895 "enable_placement_id": 0, 00:04:25.895 "enable_zerocopy_send_server": true, 00:04:25.895 "enable_zerocopy_send_client": false, 00:04:25.895 "zerocopy_threshold": 0, 00:04:25.895 "tls_version": 0, 00:04:25.895 "enable_ktls": false 00:04:25.895 } 00:04:25.895 } 00:04:25.895 ] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "vmd", 00:04:25.895 "config": [] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "accel", 00:04:25.895 "config": [ 00:04:25.895 { 00:04:25.895 "method": "accel_set_options", 00:04:25.895 "params": { 00:04:25.895 "small_cache_size": 128, 00:04:25.895 "large_cache_size": 16, 00:04:25.895 "task_count": 2048, 00:04:25.895 "sequence_count": 2048, 00:04:25.895 "buf_count": 2048 00:04:25.895 } 00:04:25.895 } 00:04:25.895 ] 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "subsystem": "bdev", 00:04:25.895 "config": [ 00:04:25.895 { 00:04:25.895 "method": "bdev_set_options", 00:04:25.895 "params": { 00:04:25.895 "bdev_io_pool_size": 65535, 00:04:25.895 "bdev_io_cache_size": 256, 00:04:25.895 "bdev_auto_examine": true, 00:04:25.895 "iobuf_small_cache_size": 128, 00:04:25.895 "iobuf_large_cache_size": 16 00:04:25.895 } 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "method": "bdev_raid_set_options", 00:04:25.895 "params": { 00:04:25.895 "process_window_size_kb": 1024, 00:04:25.895 "process_max_bandwidth_mb_sec": 0 00:04:25.895 } 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "method": "bdev_iscsi_set_options", 00:04:25.895 "params": { 00:04:25.895 "timeout_sec": 30 00:04:25.895 } 00:04:25.895 }, 00:04:25.895 { 00:04:25.895 "method": "bdev_nvme_set_options", 00:04:25.896 "params": { 00:04:25.896 "action_on_timeout": "none", 00:04:25.896 "timeout_us": 0, 00:04:25.896 "timeout_admin_us": 0, 00:04:25.896 "keep_alive_timeout_ms": 10000, 00:04:25.896 "arbitration_burst": 0, 00:04:25.896 "low_priority_weight": 0, 00:04:25.896 "medium_priority_weight": 0, 00:04:25.896 "high_priority_weight": 0, 00:04:25.896 "nvme_adminq_poll_period_us": 10000, 00:04:25.896 "nvme_ioq_poll_period_us": 0, 00:04:25.896 "io_queue_requests": 0, 00:04:25.896 "delay_cmd_submit": true, 00:04:25.896 "transport_retry_count": 4, 00:04:25.896 "bdev_retry_count": 3, 00:04:25.896 "transport_ack_timeout": 0, 00:04:25.896 "ctrlr_loss_timeout_sec": 0, 00:04:25.896 "reconnect_delay_sec": 0, 00:04:25.896 "fast_io_fail_timeout_sec": 0, 00:04:25.896 "disable_auto_failback": false, 00:04:25.896 "generate_uuids": false, 00:04:25.896 "transport_tos": 0, 00:04:25.896 "nvme_error_stat": false, 00:04:25.896 "rdma_srq_size": 0, 00:04:25.896 "io_path_stat": false, 
00:04:25.896 "allow_accel_sequence": false, 00:04:25.896 "rdma_max_cq_size": 0, 00:04:25.896 "rdma_cm_event_timeout_ms": 0, 00:04:25.896 "dhchap_digests": [ 00:04:25.896 "sha256", 00:04:25.896 "sha384", 00:04:25.896 "sha512" 00:04:25.896 ], 00:04:25.896 "dhchap_dhgroups": [ 00:04:25.896 "null", 00:04:25.896 "ffdhe2048", 00:04:25.896 "ffdhe3072", 00:04:25.896 "ffdhe4096", 00:04:25.896 "ffdhe6144", 00:04:25.896 "ffdhe8192" 00:04:25.896 ], 00:04:25.896 "rdma_umr_per_io": false 00:04:25.896 } 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "method": "bdev_nvme_set_hotplug", 00:04:25.896 "params": { 00:04:25.896 "period_us": 100000, 00:04:25.896 "enable": false 00:04:25.896 } 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "method": "bdev_wait_for_examine" 00:04:25.896 } 00:04:25.896 ] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "scsi", 00:04:25.896 "config": null 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "scheduler", 00:04:25.896 "config": [ 00:04:25.896 { 00:04:25.896 "method": "framework_set_scheduler", 00:04:25.896 "params": { 00:04:25.896 "name": "static" 00:04:25.896 } 00:04:25.896 } 00:04:25.896 ] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "vhost_scsi", 00:04:25.896 "config": [] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "vhost_blk", 00:04:25.896 "config": [] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "ublk", 00:04:25.896 "config": [] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "nbd", 00:04:25.896 "config": [] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "nvmf", 00:04:25.896 "config": [ 00:04:25.896 { 00:04:25.896 "method": "nvmf_set_config", 00:04:25.896 "params": { 00:04:25.896 "discovery_filter": "match_any", 00:04:25.896 "admin_cmd_passthru": { 00:04:25.896 "identify_ctrlr": false 00:04:25.896 }, 00:04:25.896 "dhchap_digests": [ 00:04:25.896 "sha256", 00:04:25.896 "sha384", 00:04:25.896 "sha512" 00:04:25.896 ], 00:04:25.896 "dhchap_dhgroups": [ 00:04:25.896 "null", 00:04:25.896 "ffdhe2048", 00:04:25.896 "ffdhe3072", 00:04:25.896 "ffdhe4096", 00:04:25.896 "ffdhe6144", 00:04:25.896 "ffdhe8192" 00:04:25.896 ] 00:04:25.896 } 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "method": "nvmf_set_max_subsystems", 00:04:25.896 "params": { 00:04:25.896 "max_subsystems": 1024 00:04:25.896 } 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "method": "nvmf_set_crdt", 00:04:25.896 "params": { 00:04:25.896 "crdt1": 0, 00:04:25.896 "crdt2": 0, 00:04:25.896 "crdt3": 0 00:04:25.896 } 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "method": "nvmf_create_transport", 00:04:25.896 "params": { 00:04:25.896 "trtype": "TCP", 00:04:25.896 "max_queue_depth": 128, 00:04:25.896 "max_io_qpairs_per_ctrlr": 127, 00:04:25.896 "in_capsule_data_size": 4096, 00:04:25.896 "max_io_size": 131072, 00:04:25.896 "io_unit_size": 131072, 00:04:25.896 "max_aq_depth": 128, 00:04:25.896 "num_shared_buffers": 511, 00:04:25.896 "buf_cache_size": 4294967295, 00:04:25.896 "dif_insert_or_strip": false, 00:04:25.896 "zcopy": false, 00:04:25.896 "c2h_success": true, 00:04:25.896 "sock_priority": 0, 00:04:25.896 "abort_timeout_sec": 1, 00:04:25.896 "ack_timeout": 0, 00:04:25.896 "data_wr_pool_size": 0 00:04:25.896 } 00:04:25.896 } 00:04:25.896 ] 00:04:25.896 }, 00:04:25.896 { 00:04:25.896 "subsystem": "iscsi", 00:04:25.896 "config": [ 00:04:25.896 { 00:04:25.896 "method": "iscsi_set_options", 00:04:25.896 "params": { 00:04:25.896 "node_base": "iqn.2016-06.io.spdk", 00:04:25.896 "max_sessions": 128, 00:04:25.896 "max_connections_per_session": 2, 00:04:25.896 
"max_queue_depth": 64, 00:04:25.896 "default_time2wait": 2, 00:04:25.896 "default_time2retain": 20, 00:04:25.896 "first_burst_length": 8192, 00:04:25.896 "immediate_data": true, 00:04:25.896 "allow_duplicated_isid": false, 00:04:25.896 "error_recovery_level": 0, 00:04:25.896 "nop_timeout": 60, 00:04:25.896 "nop_in_interval": 30, 00:04:25.896 "disable_chap": false, 00:04:25.896 "require_chap": false, 00:04:25.896 "mutual_chap": false, 00:04:25.896 "chap_group": 0, 00:04:25.896 "max_large_datain_per_connection": 64, 00:04:25.896 "max_r2t_per_connection": 4, 00:04:25.896 "pdu_pool_size": 36864, 00:04:25.896 "immediate_data_pool_size": 16384, 00:04:25.896 "data_out_pool_size": 2048 00:04:25.896 } 00:04:25.896 } 00:04:25.896 ] 00:04:25.896 } 00:04:25.896 ] 00:04:25.896 } 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@40 -- # killprocess 59254 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59254 ']' 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59254 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59254 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:25.896 killing process with pid 59254 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59254' 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@973 -- # kill 59254 00:04:25.896 01:59:50 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59254 00:04:27.301 01:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --json /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:27.301 01:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@47 -- # local spdk_pid=59292 00:04:27.301 01:59:51 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@48 -- # sleep 5 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@50 -- # killprocess 59292 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@954 -- # '[' -z 59292 ']' 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@958 -- # kill -0 59292 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # uname 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59292 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:32.564 killing process with pid 59292 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59292' 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json 
-- common/autotest_common.sh@973 -- # kill 59292 00:04:32.564 01:59:56 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@978 -- # wait 59292 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@51 -- # grep -q 'TCP Transport Init' /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_json -- rpc/skip_rpc.sh@52 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/log.txt 00:04:33.501 00:04:33.501 real 0m8.393s 00:04:33.501 user 0m8.017s 00:04:33.501 sys 0m0.553s 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.501 ************************************ 00:04:33.501 END TEST skip_rpc_with_json 00:04:33.501 ************************************ 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_json -- common/autotest_common.sh@10 -- # set +x 00:04:33.501 01:59:57 skip_rpc -- rpc/skip_rpc.sh@75 -- # run_test skip_rpc_with_delay test_skip_rpc_with_delay 00:04:33.501 01:59:57 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.501 01:59:57 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.501 01:59:57 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.501 ************************************ 00:04:33.501 START TEST skip_rpc_with_delay 00:04:33.501 ************************************ 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1129 -- # test_skip_rpc_with_delay 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- rpc/skip_rpc.sh@57 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@652 -- # local es=0 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:33.501 01:59:57 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --no-rpc-server -m 0x1 --wait-for-rpc 00:04:33.501 [2024-12-15 01:59:58.073984] app.c: 842:spdk_app_start: *ERROR*: Cannot use '--wait-for-rpc' if no RPC server is going to be started. 
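The spdk_app_start error just above is the point of skip_rpc_with_delay: --wait-for-rpc defers subsystem initialization until a framework_start_init call arrives over RPC, so combining it with --no-rpc-server is rejected because that call could never arrive. A sketch of the valid pairing, assuming the same binary path:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --wait-for-rpc &
  sleep 1                               # crude wait for the RPC socket (assumption)
  scripts/rpc.py framework_start_init   # runs the deferred subsystem init
  scripts/rpc.py spdk_get_version       # target is now fully up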
00:04:33.501 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@655 -- # es=1 00:04:33.501 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:33.501 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:04:33.502 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:33.502 00:04:33.502 real 0m0.130s 00:04:33.502 user 0m0.065s 00:04:33.502 sys 0m0.062s 00:04:33.502 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:33.502 ************************************ 00:04:33.502 END TEST skip_rpc_with_delay 00:04:33.502 01:59:58 skip_rpc.skip_rpc_with_delay -- common/autotest_common.sh@10 -- # set +x 00:04:33.502 ************************************ 00:04:33.502 01:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # uname 00:04:33.502 01:59:58 skip_rpc -- rpc/skip_rpc.sh@77 -- # '[' Linux '!=' FreeBSD ']' 00:04:33.502 01:59:58 skip_rpc -- rpc/skip_rpc.sh@78 -- # run_test exit_on_failed_rpc_init test_exit_on_failed_rpc_init 00:04:33.502 01:59:58 skip_rpc -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:33.502 01:59:58 skip_rpc -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:33.502 01:59:58 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:33.502 ************************************ 00:04:33.502 START TEST exit_on_failed_rpc_init 00:04:33.502 ************************************ 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1129 -- # test_exit_on_failed_rpc_init 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@62 -- # local spdk_pid=59410 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@63 -- # waitforlisten 59410 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@835 -- # '[' -z 59410 ']' 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:33.502 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:33.502 01:59:58 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:33.761 [2024-12-15 01:59:58.263757] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
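exit_on_failed_rpc_init, starting above, verifies that a target exits non-zero when its RPC listener cannot bind: the first spdk_tgt holds /var/tmp/spdk.sock, and the second instance launched with -m 0x2 below must fail. A sketch of the expected failure, assuming the same binary:

  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 &   # owns /var/tmp/spdk.sock
  pid=$!
  sleep 1                                                    # crude startup wait (assumption)
  if /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2; then
      echo "unexpected: second instance started" >&2; kill "$pid"; exit 1
  fi
  kill "$pid"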
00:04:33.761 [2024-12-15 01:59:58.263905] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59410 ] 00:04:33.761 [2024-12-15 01:59:58.422524] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:33.761 [2024-12-15 01:59:58.510832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@868 -- # return 0 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@65 -- # trap 'killprocess $spdk_pid; exit 1' SIGINT SIGTERM EXIT 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@67 -- # NOT /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@652 -- # local es=0 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ]] 00:04:34.346 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 00:04:34.603 [2024-12-15 01:59:59.165808] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:34.603 [2024-12-15 01:59:59.165922] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59428 ] 00:04:34.603 [2024-12-15 01:59:59.325065] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:34.860 [2024-12-15 01:59:59.418216] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:34.860 [2024-12-15 01:59:59.418280] rpc.c: 180:_spdk_rpc_listen: *ERROR*: RPC Unix domain socket path /var/tmp/spdk.sock in use. Specify another. 
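The rpc.c errors around this point are the failure path under test: the second instance finds the Unix socket busy, spdk_rpc_initialize fails, and spdk_app_stop exits non-zero (es=234 below). Two targets can coexist if each is given its own socket with -r, as in this hedged sketch:

  # Side-by-side targets need distinct RPC sockets and distinct core masks.
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk_a.sock &
  /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x2 -r /var/tmp/spdk_b.sock &
  sleep 1
  scripts/rpc.py -s /var/tmp/spdk_a.sock spdk_get_version
  scripts/rpc.py -s /var/tmp/spdk_b.sock spdk_get_version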
00:04:34.860 [2024-12-15 01:59:59.418292] rpc.c: 166:spdk_rpc_initialize: *ERROR*: Unable to start RPC service at /var/tmp/spdk.sock 00:04:34.860 [2024-12-15 01:59:59.418304] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@655 -- # es=234 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@664 -- # es=106 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@665 -- # case "$es" in 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@672 -- # es=1 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@69 -- # trap - SIGINT SIGTERM EXIT 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- rpc/skip_rpc.sh@70 -- # killprocess 59410 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@954 -- # '[' -z 59410 ']' 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@958 -- # kill -0 59410 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # uname 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59410 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:34.860 killing process with pid 59410 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59410' 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@973 -- # kill 59410 00:04:34.860 01:59:59 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@978 -- # wait 59410 00:04:36.235 00:04:36.235 real 0m2.615s 00:04:36.235 user 0m2.921s 00:04:36.235 sys 0m0.399s 00:04:36.235 ************************************ 00:04:36.235 END TEST exit_on_failed_rpc_init 00:04:36.235 ************************************ 00:04:36.235 02:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.235 02:00:00 skip_rpc.exit_on_failed_rpc_init -- common/autotest_common.sh@10 -- # set +x 00:04:36.235 02:00:00 skip_rpc -- rpc/skip_rpc.sh@81 -- # rm /home/vagrant/spdk_repo/spdk/test/rpc/config.json 00:04:36.235 00:04:36.235 real 0m17.691s 00:04:36.235 user 0m16.970s 00:04:36.235 sys 0m1.447s 00:04:36.235 02:00:00 skip_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.235 02:00:00 skip_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:36.235 ************************************ 00:04:36.235 END TEST skip_rpc 00:04:36.235 ************************************ 00:04:36.235 02:00:00 -- spdk/autotest.sh@158 -- # run_test rpc_client /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.235 02:00:00 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.235 02:00:00 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.235 02:00:00 -- common/autotest_common.sh@10 -- # set +x 00:04:36.235 
************************************ 00:04:36.235 START TEST rpc_client 00:04:36.235 ************************************ 00:04:36.235 02:00:00 rpc_client -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client.sh 00:04:36.235 * Looking for test storage... 00:04:36.235 * Found test storage at /home/vagrant/spdk_repo/spdk/test/rpc_client 00:04:36.235 02:00:00 rpc_client -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.235 02:00:00 rpc_client -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.235 02:00:00 rpc_client -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.494 02:00:01 rpc_client -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.494 02:00:01 rpc_client -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@344 -- # case "$op" in 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@345 -- # : 1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@365 -- # decimal 1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@353 -- # local d=1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@355 -- # echo 1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@366 -- # decimal 2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@353 -- # local d=2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@355 -- # echo 2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.495 02:00:01 rpc_client -- scripts/common.sh@368 -- # return 0 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.495 --rc genhtml_branch_coverage=1 00:04:36.495 --rc genhtml_function_coverage=1 00:04:36.495 --rc genhtml_legend=1 00:04:36.495 --rc geninfo_all_blocks=1 00:04:36.495 --rc geninfo_unexecuted_blocks=1 00:04:36.495 00:04:36.495 ' 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.495 --rc genhtml_branch_coverage=1 00:04:36.495 --rc genhtml_function_coverage=1 00:04:36.495 --rc genhtml_legend=1 00:04:36.495 --rc geninfo_all_blocks=1 00:04:36.495 --rc geninfo_unexecuted_blocks=1 00:04:36.495 00:04:36.495 ' 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.495 --rc genhtml_branch_coverage=1 00:04:36.495 --rc genhtml_function_coverage=1 00:04:36.495 --rc genhtml_legend=1 00:04:36.495 --rc geninfo_all_blocks=1 00:04:36.495 --rc geninfo_unexecuted_blocks=1 00:04:36.495 00:04:36.495 ' 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.495 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.495 --rc genhtml_branch_coverage=1 00:04:36.495 --rc genhtml_function_coverage=1 00:04:36.495 --rc genhtml_legend=1 00:04:36.495 --rc geninfo_all_blocks=1 00:04:36.495 --rc geninfo_unexecuted_blocks=1 00:04:36.495 00:04:36.495 ' 00:04:36.495 02:00:01 rpc_client -- rpc_client/rpc_client.sh@10 -- # /home/vagrant/spdk_repo/spdk/test/rpc_client/rpc_client_test 00:04:36.495 OK 00:04:36.495 02:00:01 rpc_client -- rpc_client/rpc_client.sh@12 -- # trap - SIGINT SIGTERM EXIT 00:04:36.495 00:04:36.495 real 0m0.194s 00:04:36.495 user 0m0.102s 00:04:36.495 sys 0m0.097s 00:04:36.495 ************************************ 00:04:36.495 END TEST rpc_client 00:04:36.495 ************************************ 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.495 02:00:01 rpc_client -- common/autotest_common.sh@10 -- # set +x 00:04:36.495 02:00:01 -- spdk/autotest.sh@159 -- # run_test json_config /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.495 02:00:01 -- 
common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.495 02:00:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.495 02:00:01 -- common/autotest_common.sh@10 -- # set +x 00:04:36.495 ************************************ 00:04:36.495 START TEST json_config 00:04:36.495 ************************************ 00:04:36.495 02:00:01 json_config -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config.sh 00:04:36.495 02:00:01 json_config -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.495 02:00:01 json_config -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.495 02:00:01 json_config -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.495 02:00:01 json_config -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.495 02:00:01 json_config -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.495 02:00:01 json_config -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.495 02:00:01 json_config -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.495 02:00:01 json_config -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.495 02:00:01 json_config -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.495 02:00:01 json_config -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.495 02:00:01 json_config -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.495 02:00:01 json_config -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.495 02:00:01 json_config -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.495 02:00:01 json_config -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.495 02:00:01 json_config -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.495 02:00:01 json_config -- scripts/common.sh@344 -- # case "$op" in 00:04:36.495 02:00:01 json_config -- scripts/common.sh@345 -- # : 1 00:04:36.495 02:00:01 json_config -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.495 02:00:01 json_config -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:36.495 02:00:01 json_config -- scripts/common.sh@365 -- # decimal 1 00:04:36.495 02:00:01 json_config -- scripts/common.sh@353 -- # local d=1 00:04:36.758 02:00:01 json_config -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.758 02:00:01 json_config -- scripts/common.sh@355 -- # echo 1 00:04:36.758 02:00:01 json_config -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.758 02:00:01 json_config -- scripts/common.sh@366 -- # decimal 2 00:04:36.758 02:00:01 json_config -- scripts/common.sh@353 -- # local d=2 00:04:36.758 02:00:01 json_config -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.758 02:00:01 json_config -- scripts/common.sh@355 -- # echo 2 00:04:36.758 02:00:01 json_config -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.758 02:00:01 json_config -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.758 02:00:01 json_config -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.758 02:00:01 json_config -- scripts/common.sh@368 -- # return 0 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.758 --rc genhtml_branch_coverage=1 00:04:36.758 --rc genhtml_function_coverage=1 00:04:36.758 --rc genhtml_legend=1 00:04:36.758 --rc geninfo_all_blocks=1 00:04:36.758 --rc geninfo_unexecuted_blocks=1 00:04:36.758 00:04:36.758 ' 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.758 --rc genhtml_branch_coverage=1 00:04:36.758 --rc genhtml_function_coverage=1 00:04:36.758 --rc genhtml_legend=1 00:04:36.758 --rc geninfo_all_blocks=1 00:04:36.758 --rc geninfo_unexecuted_blocks=1 00:04:36.758 00:04:36.758 ' 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.758 --rc genhtml_branch_coverage=1 00:04:36.758 --rc genhtml_function_coverage=1 00:04:36.758 --rc genhtml_legend=1 00:04:36.758 --rc geninfo_all_blocks=1 00:04:36.758 --rc geninfo_unexecuted_blocks=1 00:04:36.758 00:04:36.758 ' 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.758 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.758 --rc genhtml_branch_coverage=1 00:04:36.758 --rc genhtml_function_coverage=1 00:04:36.758 --rc genhtml_legend=1 00:04:36.758 --rc geninfo_all_blocks=1 00:04:36.758 --rc geninfo_unexecuted_blocks=1 00:04:36.758 00:04:36.758 ' 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@7 -- # uname -s 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.758 02:00:01 
json_config -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@18 -- # NVME_HOSTID=fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.758 02:00:01 json_config -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.758 02:00:01 json_config -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.758 02:00:01 json_config -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.758 02:00:01 json_config -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.758 02:00:01 json_config -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.758 02:00:01 json_config -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.758 02:00:01 json_config -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.758 02:00:01 json_config -- paths/export.sh@5 -- # export PATH 00:04:36.758 02:00:01 json_config -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@51 -- # : 0 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.758 02:00:01 json_config -- 
nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.758 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.758 02:00:01 json_config -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@11 -- # [[ 0 -eq 1 ]] 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -ne 1 ]] 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@15 -- # [[ 0 -eq 1 ]] 00:04:36.758 WARNING: No tests are enabled so not running JSON configuration tests 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@26 -- # (( SPDK_TEST_BLOCKDEV + SPDK_TEST_ISCSI + SPDK_TEST_NVMF + SPDK_TEST_VHOST + SPDK_TEST_VHOST_INIT + SPDK_TEST_RBD == 0 )) 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@27 -- # echo 'WARNING: No tests are enabled so not running JSON configuration tests' 00:04:36.758 02:00:01 json_config -- json_config/json_config.sh@28 -- # exit 0 00:04:36.758 00:04:36.758 real 0m0.144s 00:04:36.758 user 0m0.082s 00:04:36.758 sys 0m0.065s 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:36.758 02:00:01 json_config -- common/autotest_common.sh@10 -- # set +x 00:04:36.758 ************************************ 00:04:36.758 END TEST json_config 00:04:36.758 ************************************ 00:04:36.758 02:00:01 -- spdk/autotest.sh@160 -- # run_test json_config_extra_key /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.758 02:00:01 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:36.758 02:00:01 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:36.758 02:00:01 -- common/autotest_common.sh@10 -- # set +x 00:04:36.758 ************************************ 00:04:36.758 START TEST json_config_extra_key 00:04:36.758 ************************************ 00:04:36.758 02:00:01 json_config_extra_key -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/json_config_extra_key.sh 00:04:36.758 02:00:01 json_config_extra_key -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:36.758 02:00:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:36.758 02:00:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lcov --version 00:04:36.758 02:00:01 json_config_extra_key -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:36.758 02:00:01 json_config_extra_key -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:36.758 02:00:01 json_config_extra_key -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@336 -- # IFS=.-: 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@336 -- # read -ra ver1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@337 -- # IFS=.-: 00:04:36.759 02:00:01 
json_config_extra_key -- scripts/common.sh@337 -- # read -ra ver2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@338 -- # local 'op=<' 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@340 -- # ver1_l=2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@341 -- # ver2_l=1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@344 -- # case "$op" in 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@345 -- # : 1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@365 -- # decimal 1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@365 -- # ver1[v]=1 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@366 -- # decimal 2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@353 -- # local d=2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@355 -- # echo 2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@366 -- # ver2[v]=2 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@368 -- # return 0 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:36.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.759 --rc genhtml_branch_coverage=1 00:04:36.759 --rc genhtml_function_coverage=1 00:04:36.759 --rc genhtml_legend=1 00:04:36.759 --rc geninfo_all_blocks=1 00:04:36.759 --rc geninfo_unexecuted_blocks=1 00:04:36.759 00:04:36.759 ' 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:36.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.759 --rc genhtml_branch_coverage=1 00:04:36.759 --rc genhtml_function_coverage=1 00:04:36.759 --rc genhtml_legend=1 00:04:36.759 --rc geninfo_all_blocks=1 00:04:36.759 --rc geninfo_unexecuted_blocks=1 00:04:36.759 00:04:36.759 ' 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:36.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.759 --rc genhtml_branch_coverage=1 00:04:36.759 --rc genhtml_function_coverage=1 00:04:36.759 --rc genhtml_legend=1 00:04:36.759 --rc geninfo_all_blocks=1 00:04:36.759 --rc geninfo_unexecuted_blocks=1 00:04:36.759 00:04:36.759 ' 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:36.759 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:36.759 --rc genhtml_branch_coverage=1 00:04:36.759 --rc 
genhtml_function_coverage=1 00:04:36.759 --rc genhtml_legend=1 00:04:36.759 --rc geninfo_all_blocks=1 00:04:36.759 --rc geninfo_unexecuted_blocks=1 00:04:36.759 00:04:36.759 ' 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@7 -- # uname -s 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@7 -- # [[ Linux == FreeBSD ]] 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@9 -- # NVMF_PORT=4420 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@10 -- # NVMF_SECOND_PORT=4421 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@11 -- # NVMF_THIRD_PORT=4422 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@12 -- # NVMF_IP_PREFIX=192.168.100 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@13 -- # NVMF_IP_LEAST_ADDR=8 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@14 -- # NVMF_TCP_IP_ADDRESS=127.0.0.1 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@15 -- # NVMF_TRANSPORT_OPTS= 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@16 -- # NVMF_SERIAL=SPDKISFASTANDAWESOME 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@17 -- # nvme gen-hostnqn 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@17 -- # NVME_HOSTNQN=nqn.2014-08.org.nvmexpress:uuid:fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@18 -- # NVME_HOSTID=fb2ab7fd-1bf2-4a37-bf80-a2d36b143c94 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@19 -- # NVME_HOST=("--hostnqn=$NVME_HOSTNQN" "--hostid=$NVME_HOSTID") 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@20 -- # NVME_CONNECT='nvme connect' 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@21 -- # NET_TYPE=phy-fallback 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@22 -- # NVME_SUBNQN=nqn.2016-06.io.spdk:testnqn 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@49 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@15 -- # shopt -s extglob 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:04:36.759 02:00:01 json_config_extra_key -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:04:36.759 02:00:01 json_config_extra_key -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.759 02:00:01 json_config_extra_key -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.759 02:00:01 json_config_extra_key -- paths/export.sh@4 
-- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.759 02:00:01 json_config_extra_key -- paths/export.sh@5 -- # export PATH 00:04:36.759 02:00:01 json_config_extra_key -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@51 -- # : 0 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@52 -- # export NVMF_APP_SHM_ID 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@53 -- # build_nvmf_app_args 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@25 -- # '[' 0 -eq 1 ']' 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@29 -- # NVMF_APP+=(-i "$NVMF_APP_SHM_ID" -e 0xFFFF) 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@31 -- # NVMF_APP+=("${NO_HUGE[@]}") 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@33 -- # '[' '' -eq 1 ']' 00:04:36.759 /home/vagrant/spdk_repo/spdk/test/nvmf/common.sh: line 33: [: : integer expression expected 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@37 -- # '[' -n '' ']' 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@39 -- # '[' 0 -eq 1 ']' 00:04:36.759 02:00:01 json_config_extra_key -- nvmf/common.sh@55 -- # have_pci_nics=0 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/json_config/common.sh 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # app_pid=(['target']='') 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@17 -- # declare -A app_pid 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # app_socket=(['target']='/var/tmp/spdk_tgt.sock') 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@18 -- # declare -A app_socket 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # app_params=(['target']='-m 0x1 -s 1024') 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@19 -- # declare -A app_params 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json') 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@20 -- # declare -A configs_path 00:04:36.759 INFO: launching applications... 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@22 -- # trap 'on_error_exit "${FUNCNAME}" "${LINENO}"' ERR 00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@24 -- # echo 'INFO: launching applications...' 
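The common.sh sourced above keeps one entry per application in a set of associative arrays keyed by app name (app_pid, app_socket, app_params, configs_path) and arms an ERR trap before launching anything. A minimal sketch of that registry pattern, with the array names, values, and spdk_tgt path taken from the trace and start_app as a hypothetical stand-in for the test's own json_config_test_start_app:

    #!/usr/bin/env bash
    # Per-app registry as traced from test/json_config/common.sh; the
    # start_app helper below is illustrative, not the test's own function.
    declare -A app_pid=(['target']='')
    declare -A app_socket=(['target']='/var/tmp/spdk_tgt.sock')
    declare -A app_params=(['target']='-m 0x1 -s 1024')
    declare -A configs_path=(['target']='/home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json')

    trap 'echo "error in ${FUNCNAME:-main}:${LINENO}" >&2' ERR

    start_app() {
        local app=$1
        # app_params is left unquoted on purpose so '-m 0x1 -s 1024'
        # splits into separate arguments.
        /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt ${app_params[$app]} \
            -r "${app_socket[$app]}" --json "${configs_path[$app]}" &
        app_pid[$app]=$!
    }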
00:04:36.759 02:00:01 json_config_extra_key -- json_config/json_config_extra_key.sh@25 -- # json_config_test_start_app target --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@9 -- # local app=target 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@10 -- # shift 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@12 -- # [[ -n 22 ]] 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@13 -- # [[ -z '' ]] 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@15 -- # local app_extra_params= 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@16 -- # [[ 0 -eq 1 ]] 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@22 -- # app_pid["$app"]=59616 00:04:36.759 Waiting for target to run... 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@24 -- # echo 'Waiting for target to run...' 00:04:36.759 02:00:01 json_config_extra_key -- json_config/common.sh@25 -- # waitforlisten 59616 /var/tmp/spdk_tgt.sock 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@835 -- # '[' -z 59616 ']' 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk_tgt.sock 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:36.759 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock... 00:04:36.759 02:00:01 json_config_extra_key -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk_tgt.sock...' 00:04:36.760 02:00:01 json_config_extra_key -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:36.760 02:00:01 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:36.760 02:00:01 json_config_extra_key -- json_config/common.sh@21 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -s 1024 -r /var/tmp/spdk_tgt.sock --json /home/vagrant/spdk_repo/spdk/test/json_config/extra_key.json 00:04:37.020 [2024-12-15 02:00:01.575374] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:37.020 [2024-12-15 02:00:01.575493] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 -m 1024 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59616 ] 00:04:37.281 [2024-12-15 02:00:01.891532] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:37.281 [2024-12-15 02:00:01.978364] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:37.853 02:00:02 json_config_extra_key -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:37.853 00:04:37.853 02:00:02 json_config_extra_key -- common/autotest_common.sh@868 -- # return 0 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@26 -- # echo '' 00:04:37.853 INFO: shutting down applications... 00:04:37.853 02:00:02 json_config_extra_key -- json_config/json_config_extra_key.sh@27 -- # echo 'INFO: shutting down applications...' 
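Shutdown, traced next, is deliberately gentle: one SIGINT, then a poll loop that probes the PID with kill -0 every half second and gives up after 30 tries. The same logic as a standalone function (the loop structure follows json_config/common.sh lines 40-45 in the trace; the function name and failure message are illustrative):

    # kill -0 sends no signal; it only tests whether the PID still exists.
    shutdown_app() {
        local pid=$1
        kill -SIGINT "$pid"
        for (( i = 0; i < 30; i++ )); do
            if ! kill -0 "$pid" 2>/dev/null; then
                echo 'SPDK target shutdown done'
                return 0
            fi
            sleep 0.5
        done
        echo "process $pid still alive after 15s" >&2
        return 1
    }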
00:04:37.853 02:00:02 json_config_extra_key -- json_config/json_config_extra_key.sh@28 -- # json_config_test_shutdown_app target 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@31 -- # local app=target 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@34 -- # [[ -n 22 ]] 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@35 -- # [[ -n 59616 ]] 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@38 -- # kill -SIGINT 59616 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i = 0 )) 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:37.853 02:00:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.421 02:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.421 02:00:02 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.421 02:00:02 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:38.421 02:00:02 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:38.987 02:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:38.987 02:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:38.987 02:00:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:38.987 02:00:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.247 02:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.247 02:00:03 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.247 02:00:03 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:39.247 02:00:03 json_config_extra_key -- json_config/common.sh@45 -- # sleep 0.5 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i++ )) 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@40 -- # (( i < 30 )) 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@41 -- # kill -0 59616 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@42 -- # app_pid["$app"]= 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@43 -- # break 00:04:39.819 SPDK target shutdown done 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@48 -- # [[ -n '' ]] 00:04:39.819 02:00:04 json_config_extra_key -- json_config/common.sh@53 -- # echo 'SPDK target shutdown done' 00:04:39.819 Success 00:04:39.819 02:00:04 json_config_extra_key -- json_config/json_config_extra_key.sh@30 -- # echo Success 00:04:39.819 ************************************ 00:04:39.819 END TEST json_config_extra_key 00:04:39.819 ************************************ 00:04:39.819 00:04:39.819 real 0m3.159s 00:04:39.819 user 0m2.558s 00:04:39.819 sys 0m0.401s 00:04:39.819 02:00:04 json_config_extra_key -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:39.819 02:00:04 json_config_extra_key -- common/autotest_common.sh@10 -- # set +x 00:04:39.819 02:00:04 -- spdk/autotest.sh@161 -- # run_test alias_rpc /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:39.819 02:00:04 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:39.819 02:00:04 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:39.819 02:00:04 -- common/autotest_common.sh@10 -- # set +x 00:04:39.819 
************************************ 00:04:39.819 START TEST alias_rpc 00:04:39.819 ************************************ 00:04:39.819 02:00:04 alias_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc/alias_rpc.sh 00:04:40.079 * Looking for test storage... 00:04:40.079 * Found test storage at /home/vagrant/spdk_repo/spdk/test/json_config/alias_rpc 00:04:40.079 02:00:04 alias_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:40.079 02:00:04 alias_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:04:40.079 02:00:04 alias_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:40.079 02:00:04 alias_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:40.079 02:00:04 alias_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:40.079 02:00:04 alias_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:40.079 02:00:04 alias_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:40.079 02:00:04 alias_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:04:40.079 02:00:04 alias_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@344 -- # case "$op" in 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@345 -- # : 1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@365 -- # decimal 1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@353 -- # local d=1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@355 -- # echo 1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@366 -- # decimal 2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@353 -- # local d=2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@355 -- # echo 2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:40.080 02:00:04 alias_rpc -- scripts/common.sh@368 -- # return 0 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.080 --rc genhtml_branch_coverage=1 00:04:40.080 --rc genhtml_function_coverage=1 00:04:40.080 --rc genhtml_legend=1 00:04:40.080 --rc geninfo_all_blocks=1 00:04:40.080 --rc geninfo_unexecuted_blocks=1 00:04:40.080 00:04:40.080 ' 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.080 --rc genhtml_branch_coverage=1 00:04:40.080 --rc genhtml_function_coverage=1 00:04:40.080 --rc genhtml_legend=1 00:04:40.080 --rc geninfo_all_blocks=1 00:04:40.080 --rc geninfo_unexecuted_blocks=1 00:04:40.080 00:04:40.080 ' 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.080 --rc genhtml_branch_coverage=1 00:04:40.080 --rc genhtml_function_coverage=1 00:04:40.080 --rc genhtml_legend=1 00:04:40.080 --rc geninfo_all_blocks=1 00:04:40.080 --rc geninfo_unexecuted_blocks=1 00:04:40.080 00:04:40.080 ' 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:40.080 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:40.080 --rc genhtml_branch_coverage=1 00:04:40.080 --rc genhtml_function_coverage=1 00:04:40.080 --rc genhtml_legend=1 00:04:40.080 --rc geninfo_all_blocks=1 00:04:40.080 --rc geninfo_unexecuted_blocks=1 00:04:40.080 00:04:40.080 ' 00:04:40.080 02:00:04 alias_rpc -- alias_rpc/alias_rpc.sh@10 -- # trap 'killprocess $spdk_tgt_pid; exit 1' ERR 00:04:40.080 02:00:04 alias_rpc -- alias_rpc/alias_rpc.sh@13 -- # spdk_tgt_pid=59715 00:04:40.080 02:00:04 alias_rpc -- alias_rpc/alias_rpc.sh@14 -- # waitforlisten 59715 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@835 -- # '[' -z 59715 ']' 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:40.080 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
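Each test's preamble above walks scripts/common.sh comparing the installed lcov against 2.0: lt 1.15 2 splits both version strings on '.', '-' and ':' via IFS and compares the numeric fields left to right. A condensed sketch of that comparison, assuming purely numeric fields (the real decimal() helper, which normalizes non-numeric fields, is omitted):

    # lt A B -> exit 0 when version A sorts strictly before version B.
    lt() {
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # first differing field decides
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        done
        return 1   # all fields equal: not less-than
    }
    lt 1.15 2 && echo 'lcov predates 2.x, use the compat LCOV_OPTS'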
00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:40.080 02:00:04 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:40.080 02:00:04 alias_rpc -- alias_rpc/alias_rpc.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:40.080 [2024-12-15 02:00:04.776687] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:40.080 [2024-12-15 02:00:04.776965] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59715 ] 00:04:40.340 [2024-12-15 02:00:04.938278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:40.340 [2024-12-15 02:00:05.033882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:40.911 02:00:05 alias_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:40.911 02:00:05 alias_rpc -- common/autotest_common.sh@868 -- # return 0 00:04:40.911 02:00:05 alias_rpc -- alias_rpc/alias_rpc.sh@17 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config -i 00:04:41.171 02:00:05 alias_rpc -- alias_rpc/alias_rpc.sh@19 -- # killprocess 59715 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@954 -- # '[' -z 59715 ']' 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@958 -- # kill -0 59715 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@959 -- # uname 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59715 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:41.171 killing process with pid 59715 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59715' 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@973 -- # kill 59715 00:04:41.171 02:00:05 alias_rpc -- common/autotest_common.sh@978 -- # wait 59715 00:04:43.086 ************************************ 00:04:43.086 END TEST alias_rpc 00:04:43.086 ************************************ 00:04:43.086 00:04:43.086 real 0m2.827s 00:04:43.086 user 0m2.908s 00:04:43.086 sys 0m0.398s 00:04:43.086 02:00:07 alias_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:43.086 02:00:07 alias_rpc -- common/autotest_common.sh@10 -- # set +x 00:04:43.086 02:00:07 -- spdk/autotest.sh@163 -- # [[ 0 -eq 0 ]] 00:04:43.086 02:00:07 -- spdk/autotest.sh@164 -- # run_test spdkcli_tcp /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:43.086 02:00:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:43.086 02:00:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:43.086 02:00:07 -- common/autotest_common.sh@10 -- # set +x 00:04:43.086 ************************************ 00:04:43.086 START TEST spdkcli_tcp 00:04:43.086 ************************************ 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/spdkcli/tcp.sh 00:04:43.086 * Looking for test storage... 
00:04:43.086 * Found test storage at /home/vagrant/spdk_repo/spdk/test/spdkcli 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lcov --version 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@336 -- # IFS=.-: 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@336 -- # read -ra ver1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@337 -- # IFS=.-: 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@337 -- # read -ra ver2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@338 -- # local 'op=<' 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@340 -- # ver1_l=2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@341 -- # ver2_l=1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@344 -- # case "$op" in 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@345 -- # : 1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@365 -- # decimal 1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@365 -- # ver1[v]=1 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@366 -- # decimal 2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@353 -- # local d=2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@355 -- # echo 2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@366 -- # ver2[v]=2 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:43.086 02:00:07 spdkcli_tcp -- scripts/common.sh@368 -- # return 0 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.086 --rc genhtml_branch_coverage=1 00:04:43.086 --rc genhtml_function_coverage=1 00:04:43.086 --rc genhtml_legend=1 00:04:43.086 --rc geninfo_all_blocks=1 00:04:43.086 --rc geninfo_unexecuted_blocks=1 00:04:43.086 00:04:43.086 ' 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.086 --rc genhtml_branch_coverage=1 00:04:43.086 --rc genhtml_function_coverage=1 00:04:43.086 --rc genhtml_legend=1 00:04:43.086 --rc geninfo_all_blocks=1 00:04:43.086 --rc geninfo_unexecuted_blocks=1 00:04:43.086 
00:04:43.086 ' 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.086 --rc genhtml_branch_coverage=1 00:04:43.086 --rc genhtml_function_coverage=1 00:04:43.086 --rc genhtml_legend=1 00:04:43.086 --rc geninfo_all_blocks=1 00:04:43.086 --rc geninfo_unexecuted_blocks=1 00:04:43.086 00:04:43.086 ' 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:43.086 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:43.086 --rc genhtml_branch_coverage=1 00:04:43.086 --rc genhtml_function_coverage=1 00:04:43.086 --rc genhtml_legend=1 00:04:43.086 --rc geninfo_all_blocks=1 00:04:43.086 --rc geninfo_unexecuted_blocks=1 00:04:43.086 00:04:43.086 ' 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/spdkcli/common.sh 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/common.sh@6 -- # spdkcli_job=/home/vagrant/spdk_repo/spdk/test/spdkcli/spdkcli_job.py 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/common.sh@7 -- # spdk_clear_config_py=/home/vagrant/spdk_repo/spdk/test/json_config/clear_config.py 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@18 -- # IP_ADDRESS=127.0.0.1 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@19 -- # PORT=9998 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@21 -- # trap 'err_cleanup; exit 1' SIGINT SIGTERM EXIT 00:04:43.086 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@23 -- # timing_enter run_spdk_tgt_tcp 00:04:43.086 02:00:07 spdkcli_tcp -- common/autotest_common.sh@726 -- # xtrace_disable 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.087 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@25 -- # spdk_tgt_pid=59805 00:04:43.087 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@27 -- # waitforlisten 59805 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@835 -- # '[' -z 59805 ']' 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:43.087 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:43.087 02:00:07 spdkcli_tcp -- spdkcli/tcp.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -p 0 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:43.087 02:00:07 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:43.087 [2024-12-15 02:00:07.677345] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
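With the target's RPC listener on a UNIX socket, spdkcli_tcp interposes socat so the same RPCs travel over 127.0.0.1:9998. The bridge and the query, lifted from the trace into runnable form (rpc.py's -r is the connection retry count and -t the per-call timeout in seconds):

    # Expose /var/tmp/spdk.sock on TCP port 9998, then list the RPC
    # methods over TCP instead of over the UNIX socket.
    socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock &
    socat_pid=$!

    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 \
        -s 127.0.0.1 -p 9998 rpc_get_methods

    kill "$socat_pid"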
00:04:43.087 [2024-12-15 02:00:07.677455] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59805 ] 00:04:43.087 [2024-12-15 02:00:07.836012] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:43.348 [2024-12-15 02:00:07.931992] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:43.348 [2024-12-15 02:00:07.932059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:43.921 02:00:08 spdkcli_tcp -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:43.921 02:00:08 spdkcli_tcp -- common/autotest_common.sh@868 -- # return 0 00:04:43.921 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@31 -- # socat_pid=59822 00:04:43.921 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@33 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -r 100 -t 2 -s 127.0.0.1 -p 9998 rpc_get_methods 00:04:43.921 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@30 -- # socat TCP-LISTEN:9998 UNIX-CONNECT:/var/tmp/spdk.sock 00:04:44.182 [ 00:04:44.182 "bdev_malloc_delete", 00:04:44.182 "bdev_malloc_create", 00:04:44.182 "bdev_null_resize", 00:04:44.182 "bdev_null_delete", 00:04:44.182 "bdev_null_create", 00:04:44.182 "bdev_nvme_cuse_unregister", 00:04:44.182 "bdev_nvme_cuse_register", 00:04:44.182 "bdev_opal_new_user", 00:04:44.182 "bdev_opal_set_lock_state", 00:04:44.182 "bdev_opal_delete", 00:04:44.182 "bdev_opal_get_info", 00:04:44.182 "bdev_opal_create", 00:04:44.182 "bdev_nvme_opal_revert", 00:04:44.182 "bdev_nvme_opal_init", 00:04:44.182 "bdev_nvme_send_cmd", 00:04:44.182 "bdev_nvme_set_keys", 00:04:44.182 "bdev_nvme_get_path_iostat", 00:04:44.182 "bdev_nvme_get_mdns_discovery_info", 00:04:44.182 "bdev_nvme_stop_mdns_discovery", 00:04:44.182 "bdev_nvme_start_mdns_discovery", 00:04:44.182 "bdev_nvme_set_multipath_policy", 00:04:44.182 "bdev_nvme_set_preferred_path", 00:04:44.182 "bdev_nvme_get_io_paths", 00:04:44.182 "bdev_nvme_remove_error_injection", 00:04:44.182 "bdev_nvme_add_error_injection", 00:04:44.182 "bdev_nvme_get_discovery_info", 00:04:44.182 "bdev_nvme_stop_discovery", 00:04:44.182 "bdev_nvme_start_discovery", 00:04:44.182 "bdev_nvme_get_controller_health_info", 00:04:44.182 "bdev_nvme_disable_controller", 00:04:44.182 "bdev_nvme_enable_controller", 00:04:44.182 "bdev_nvme_reset_controller", 00:04:44.182 "bdev_nvme_get_transport_statistics", 00:04:44.182 "bdev_nvme_apply_firmware", 00:04:44.182 "bdev_nvme_detach_controller", 00:04:44.182 "bdev_nvme_get_controllers", 00:04:44.182 "bdev_nvme_attach_controller", 00:04:44.182 "bdev_nvme_set_hotplug", 00:04:44.182 "bdev_nvme_set_options", 00:04:44.182 "bdev_passthru_delete", 00:04:44.182 "bdev_passthru_create", 00:04:44.182 "bdev_lvol_set_parent_bdev", 00:04:44.182 "bdev_lvol_set_parent", 00:04:44.182 "bdev_lvol_check_shallow_copy", 00:04:44.182 "bdev_lvol_start_shallow_copy", 00:04:44.182 "bdev_lvol_grow_lvstore", 00:04:44.182 "bdev_lvol_get_lvols", 00:04:44.182 "bdev_lvol_get_lvstores", 00:04:44.182 "bdev_lvol_delete", 00:04:44.182 "bdev_lvol_set_read_only", 00:04:44.182 "bdev_lvol_resize", 00:04:44.182 "bdev_lvol_decouple_parent", 00:04:44.182 "bdev_lvol_inflate", 00:04:44.182 "bdev_lvol_rename", 00:04:44.182 "bdev_lvol_clone_bdev", 00:04:44.182 "bdev_lvol_clone", 00:04:44.182 "bdev_lvol_snapshot", 00:04:44.182 "bdev_lvol_create", 00:04:44.182 "bdev_lvol_delete_lvstore", 00:04:44.182 "bdev_lvol_rename_lvstore", 00:04:44.182 
"bdev_lvol_create_lvstore", 00:04:44.182 "bdev_raid_set_options", 00:04:44.183 "bdev_raid_remove_base_bdev", 00:04:44.183 "bdev_raid_add_base_bdev", 00:04:44.183 "bdev_raid_delete", 00:04:44.183 "bdev_raid_create", 00:04:44.183 "bdev_raid_get_bdevs", 00:04:44.183 "bdev_error_inject_error", 00:04:44.183 "bdev_error_delete", 00:04:44.183 "bdev_error_create", 00:04:44.183 "bdev_split_delete", 00:04:44.183 "bdev_split_create", 00:04:44.183 "bdev_delay_delete", 00:04:44.183 "bdev_delay_create", 00:04:44.183 "bdev_delay_update_latency", 00:04:44.183 "bdev_zone_block_delete", 00:04:44.183 "bdev_zone_block_create", 00:04:44.183 "blobfs_create", 00:04:44.183 "blobfs_detect", 00:04:44.183 "blobfs_set_cache_size", 00:04:44.183 "bdev_xnvme_delete", 00:04:44.183 "bdev_xnvme_create", 00:04:44.183 "bdev_aio_delete", 00:04:44.183 "bdev_aio_rescan", 00:04:44.183 "bdev_aio_create", 00:04:44.183 "bdev_ftl_set_property", 00:04:44.183 "bdev_ftl_get_properties", 00:04:44.183 "bdev_ftl_get_stats", 00:04:44.183 "bdev_ftl_unmap", 00:04:44.183 "bdev_ftl_unload", 00:04:44.183 "bdev_ftl_delete", 00:04:44.183 "bdev_ftl_load", 00:04:44.183 "bdev_ftl_create", 00:04:44.183 "bdev_virtio_attach_controller", 00:04:44.183 "bdev_virtio_scsi_get_devices", 00:04:44.183 "bdev_virtio_detach_controller", 00:04:44.183 "bdev_virtio_blk_set_hotplug", 00:04:44.183 "bdev_iscsi_delete", 00:04:44.183 "bdev_iscsi_create", 00:04:44.183 "bdev_iscsi_set_options", 00:04:44.183 "accel_error_inject_error", 00:04:44.183 "ioat_scan_accel_module", 00:04:44.183 "dsa_scan_accel_module", 00:04:44.183 "iaa_scan_accel_module", 00:04:44.183 "keyring_file_remove_key", 00:04:44.183 "keyring_file_add_key", 00:04:44.183 "keyring_linux_set_options", 00:04:44.183 "fsdev_aio_delete", 00:04:44.183 "fsdev_aio_create", 00:04:44.183 "iscsi_get_histogram", 00:04:44.183 "iscsi_enable_histogram", 00:04:44.183 "iscsi_set_options", 00:04:44.183 "iscsi_get_auth_groups", 00:04:44.183 "iscsi_auth_group_remove_secret", 00:04:44.183 "iscsi_auth_group_add_secret", 00:04:44.183 "iscsi_delete_auth_group", 00:04:44.183 "iscsi_create_auth_group", 00:04:44.183 "iscsi_set_discovery_auth", 00:04:44.183 "iscsi_get_options", 00:04:44.183 "iscsi_target_node_request_logout", 00:04:44.183 "iscsi_target_node_set_redirect", 00:04:44.183 "iscsi_target_node_set_auth", 00:04:44.183 "iscsi_target_node_add_lun", 00:04:44.183 "iscsi_get_stats", 00:04:44.183 "iscsi_get_connections", 00:04:44.183 "iscsi_portal_group_set_auth", 00:04:44.183 "iscsi_start_portal_group", 00:04:44.183 "iscsi_delete_portal_group", 00:04:44.183 "iscsi_create_portal_group", 00:04:44.183 "iscsi_get_portal_groups", 00:04:44.183 "iscsi_delete_target_node", 00:04:44.183 "iscsi_target_node_remove_pg_ig_maps", 00:04:44.183 "iscsi_target_node_add_pg_ig_maps", 00:04:44.183 "iscsi_create_target_node", 00:04:44.183 "iscsi_get_target_nodes", 00:04:44.183 "iscsi_delete_initiator_group", 00:04:44.183 "iscsi_initiator_group_remove_initiators", 00:04:44.183 "iscsi_initiator_group_add_initiators", 00:04:44.183 "iscsi_create_initiator_group", 00:04:44.183 "iscsi_get_initiator_groups", 00:04:44.183 "nvmf_set_crdt", 00:04:44.183 "nvmf_set_config", 00:04:44.183 "nvmf_set_max_subsystems", 00:04:44.183 "nvmf_stop_mdns_prr", 00:04:44.183 "nvmf_publish_mdns_prr", 00:04:44.183 "nvmf_subsystem_get_listeners", 00:04:44.183 "nvmf_subsystem_get_qpairs", 00:04:44.183 "nvmf_subsystem_get_controllers", 00:04:44.183 "nvmf_get_stats", 00:04:44.183 "nvmf_get_transports", 00:04:44.183 "nvmf_create_transport", 00:04:44.183 "nvmf_get_targets", 00:04:44.183 
"nvmf_delete_target", 00:04:44.183 "nvmf_create_target", 00:04:44.183 "nvmf_subsystem_allow_any_host", 00:04:44.183 "nvmf_subsystem_set_keys", 00:04:44.183 "nvmf_subsystem_remove_host", 00:04:44.183 "nvmf_subsystem_add_host", 00:04:44.183 "nvmf_ns_remove_host", 00:04:44.183 "nvmf_ns_add_host", 00:04:44.183 "nvmf_subsystem_remove_ns", 00:04:44.183 "nvmf_subsystem_set_ns_ana_group", 00:04:44.183 "nvmf_subsystem_add_ns", 00:04:44.183 "nvmf_subsystem_listener_set_ana_state", 00:04:44.183 "nvmf_discovery_get_referrals", 00:04:44.183 "nvmf_discovery_remove_referral", 00:04:44.183 "nvmf_discovery_add_referral", 00:04:44.183 "nvmf_subsystem_remove_listener", 00:04:44.183 "nvmf_subsystem_add_listener", 00:04:44.183 "nvmf_delete_subsystem", 00:04:44.183 "nvmf_create_subsystem", 00:04:44.183 "nvmf_get_subsystems", 00:04:44.183 "env_dpdk_get_mem_stats", 00:04:44.183 "nbd_get_disks", 00:04:44.183 "nbd_stop_disk", 00:04:44.183 "nbd_start_disk", 00:04:44.183 "ublk_recover_disk", 00:04:44.183 "ublk_get_disks", 00:04:44.183 "ublk_stop_disk", 00:04:44.183 "ublk_start_disk", 00:04:44.183 "ublk_destroy_target", 00:04:44.183 "ublk_create_target", 00:04:44.183 "virtio_blk_create_transport", 00:04:44.183 "virtio_blk_get_transports", 00:04:44.183 "vhost_controller_set_coalescing", 00:04:44.183 "vhost_get_controllers", 00:04:44.183 "vhost_delete_controller", 00:04:44.183 "vhost_create_blk_controller", 00:04:44.183 "vhost_scsi_controller_remove_target", 00:04:44.183 "vhost_scsi_controller_add_target", 00:04:44.183 "vhost_start_scsi_controller", 00:04:44.183 "vhost_create_scsi_controller", 00:04:44.183 "thread_set_cpumask", 00:04:44.183 "scheduler_set_options", 00:04:44.183 "framework_get_governor", 00:04:44.183 "framework_get_scheduler", 00:04:44.183 "framework_set_scheduler", 00:04:44.183 "framework_get_reactors", 00:04:44.183 "thread_get_io_channels", 00:04:44.183 "thread_get_pollers", 00:04:44.183 "thread_get_stats", 00:04:44.183 "framework_monitor_context_switch", 00:04:44.183 "spdk_kill_instance", 00:04:44.183 "log_enable_timestamps", 00:04:44.183 "log_get_flags", 00:04:44.183 "log_clear_flag", 00:04:44.183 "log_set_flag", 00:04:44.183 "log_get_level", 00:04:44.183 "log_set_level", 00:04:44.183 "log_get_print_level", 00:04:44.183 "log_set_print_level", 00:04:44.183 "framework_enable_cpumask_locks", 00:04:44.183 "framework_disable_cpumask_locks", 00:04:44.183 "framework_wait_init", 00:04:44.183 "framework_start_init", 00:04:44.183 "scsi_get_devices", 00:04:44.183 "bdev_get_histogram", 00:04:44.183 "bdev_enable_histogram", 00:04:44.183 "bdev_set_qos_limit", 00:04:44.183 "bdev_set_qd_sampling_period", 00:04:44.183 "bdev_get_bdevs", 00:04:44.183 "bdev_reset_iostat", 00:04:44.183 "bdev_get_iostat", 00:04:44.183 "bdev_examine", 00:04:44.183 "bdev_wait_for_examine", 00:04:44.183 "bdev_set_options", 00:04:44.183 "accel_get_stats", 00:04:44.183 "accel_set_options", 00:04:44.183 "accel_set_driver", 00:04:44.183 "accel_crypto_key_destroy", 00:04:44.183 "accel_crypto_keys_get", 00:04:44.183 "accel_crypto_key_create", 00:04:44.183 "accel_assign_opc", 00:04:44.183 "accel_get_module_info", 00:04:44.183 "accel_get_opc_assignments", 00:04:44.183 "vmd_rescan", 00:04:44.183 "vmd_remove_device", 00:04:44.183 "vmd_enable", 00:04:44.183 "sock_get_default_impl", 00:04:44.183 "sock_set_default_impl", 00:04:44.183 "sock_impl_set_options", 00:04:44.183 "sock_impl_get_options", 00:04:44.183 "iobuf_get_stats", 00:04:44.183 "iobuf_set_options", 00:04:44.183 "keyring_get_keys", 00:04:44.183 "framework_get_pci_devices", 00:04:44.183 
"framework_get_config", 00:04:44.183 "framework_get_subsystems", 00:04:44.183 "fsdev_set_opts", 00:04:44.183 "fsdev_get_opts", 00:04:44.183 "trace_get_info", 00:04:44.183 "trace_get_tpoint_group_mask", 00:04:44.183 "trace_disable_tpoint_group", 00:04:44.183 "trace_enable_tpoint_group", 00:04:44.183 "trace_clear_tpoint_mask", 00:04:44.183 "trace_set_tpoint_mask", 00:04:44.183 "notify_get_notifications", 00:04:44.183 "notify_get_types", 00:04:44.183 "spdk_get_version", 00:04:44.183 "rpc_get_methods" 00:04:44.183 ] 00:04:44.183 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@35 -- # timing_exit run_spdk_tgt_tcp 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@732 -- # xtrace_disable 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:44.183 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:04:44.183 02:00:08 spdkcli_tcp -- spdkcli/tcp.sh@38 -- # killprocess 59805 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@954 -- # '[' -z 59805 ']' 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@958 -- # kill -0 59805 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # uname 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59805 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:44.183 killing process with pid 59805 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59805' 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@973 -- # kill 59805 00:04:44.183 02:00:08 spdkcli_tcp -- common/autotest_common.sh@978 -- # wait 59805 00:04:45.561 ************************************ 00:04:45.561 END TEST spdkcli_tcp 00:04:45.561 ************************************ 00:04:45.561 00:04:45.561 real 0m2.821s 00:04:45.561 user 0m5.032s 00:04:45.561 sys 0m0.447s 00:04:45.561 02:00:10 spdkcli_tcp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:45.561 02:00:10 spdkcli_tcp -- common/autotest_common.sh@10 -- # set +x 00:04:45.561 02:00:10 -- spdk/autotest.sh@167 -- # run_test dpdk_mem_utility /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.561 02:00:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:45.561 02:00:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:45.561 02:00:10 -- common/autotest_common.sh@10 -- # set +x 00:04:45.823 ************************************ 00:04:45.823 START TEST dpdk_mem_utility 00:04:45.823 ************************************ 00:04:45.823 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility/test_dpdk_mem_info.sh 00:04:45.823 * Looking for test storage... 
00:04:45.823 * Found test storage at /home/vagrant/spdk_repo/spdk/test/dpdk_memory_utility 00:04:45.823 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:45.823 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lcov --version 00:04:45.823 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:45.823 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@336 -- # IFS=.-: 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@336 -- # read -ra ver1 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@337 -- # IFS=.-: 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@337 -- # read -ra ver2 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@338 -- # local 'op=<' 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@340 -- # ver1_l=2 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@341 -- # ver2_l=1 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@344 -- # case "$op" in 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@345 -- # : 1 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:45.823 02:00:10 dpdk_mem_utility -- scripts/common.sh@365 -- # decimal 1 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=1 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 1 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@365 -- # ver1[v]=1 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@366 -- # decimal 2 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@353 -- # local d=2 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@355 -- # echo 2 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@366 -- # ver2[v]=2 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:45.824 02:00:10 dpdk_mem_utility -- scripts/common.sh@368 -- # return 0 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:45.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.824 --rc genhtml_branch_coverage=1 00:04:45.824 --rc genhtml_function_coverage=1 00:04:45.824 --rc genhtml_legend=1 00:04:45.824 --rc geninfo_all_blocks=1 00:04:45.824 --rc geninfo_unexecuted_blocks=1 00:04:45.824 00:04:45.824 ' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:45.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.824 --rc 
genhtml_branch_coverage=1 00:04:45.824 --rc genhtml_function_coverage=1 00:04:45.824 --rc genhtml_legend=1 00:04:45.824 --rc geninfo_all_blocks=1 00:04:45.824 --rc geninfo_unexecuted_blocks=1 00:04:45.824 00:04:45.824 ' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:45.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.824 --rc genhtml_branch_coverage=1 00:04:45.824 --rc genhtml_function_coverage=1 00:04:45.824 --rc genhtml_legend=1 00:04:45.824 --rc geninfo_all_blocks=1 00:04:45.824 --rc geninfo_unexecuted_blocks=1 00:04:45.824 00:04:45.824 ' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:45.824 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:45.824 --rc genhtml_branch_coverage=1 00:04:45.824 --rc genhtml_function_coverage=1 00:04:45.824 --rc genhtml_legend=1 00:04:45.824 --rc geninfo_all_blocks=1 00:04:45.824 --rc geninfo_unexecuted_blocks=1 00:04:45.824 00:04:45.824 ' 00:04:45.824 02:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@10 -- # MEM_SCRIPT=/home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:45.824 02:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@13 -- # spdkpid=59916 00:04:45.824 02:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@15 -- # waitforlisten 59916 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@835 -- # '[' -z 59916 ']' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:45.824 02:00:10 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@12 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:04:45.824 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:45.824 02:00:10 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:45.824 [2024-12-15 02:00:10.531248] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
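The dpdk_mem_utility run that follows is two RPC-driven steps: env_dpdk_get_mem_stats makes the target write its allocator state (the reply names /tmp/spdk_mem_dump.txt), and scripts/dpdk_mem_info.py renders that dump, first as a summary and then, with -m 0, as the per-element listing for heap 0. In command form, assuming a target already listening on the default /var/tmp/spdk.sock and that the script reads the dump file named in the reply:

    # Step 1: ask the running spdk_tgt to dump DPDK memory statistics.
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats
    # reply: {"filename": "/tmp/spdk_mem_dump.txt"}

    # Step 2: summarize the dump, then drill into heap id 0 (-m 0).
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py
    /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0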
00:04:45.824 [2024-12-15 02:00:10.531342] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid59916 ] 00:04:46.085 [2024-12-15 02:00:10.684328] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:46.085 [2024-12-15 02:00:10.780804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:46.656 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:46.656 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@868 -- # return 0 00:04:46.656 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@17 -- # trap 'killprocess $spdkpid' SIGINT SIGTERM EXIT 00:04:46.656 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@19 -- # rpc_cmd env_dpdk_get_mem_stats 00:04:46.656 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:46.656 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:46.656 { 00:04:46.656 "filename": "/tmp/spdk_mem_dump.txt" 00:04:46.656 } 00:04:46.656 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:46.656 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py 00:04:46.656 DPDK memory size 824.000000 MiB in 1 heap(s) 00:04:46.656 1 heaps totaling size 824.000000 MiB 00:04:46.656 size: 824.000000 MiB heap id: 0 00:04:46.656 end heaps---------- 00:04:46.656 9 mempools totaling size 603.782043 MiB 00:04:46.656 size: 212.674988 MiB name: PDU_immediate_data_Pool 00:04:46.656 size: 158.602051 MiB name: PDU_data_out_Pool 00:04:46.656 size: 100.555481 MiB name: bdev_io_59916 00:04:46.656 size: 50.003479 MiB name: msgpool_59916 00:04:46.656 size: 36.509338 MiB name: fsdev_io_59916 00:04:46.656 size: 21.763794 MiB name: PDU_Pool 00:04:46.656 size: 19.513306 MiB name: SCSI_TASK_Pool 00:04:46.656 size: 4.133484 MiB name: evtpool_59916 00:04:46.656 size: 0.026123 MiB name: Session_Pool 00:04:46.656 end mempools------- 00:04:46.656 6 memzones totaling size 4.142822 MiB 00:04:46.656 size: 1.000366 MiB name: RG_ring_0_59916 00:04:46.656 size: 1.000366 MiB name: RG_ring_1_59916 00:04:46.656 size: 1.000366 MiB name: RG_ring_4_59916 00:04:46.656 size: 1.000366 MiB name: RG_ring_5_59916 00:04:46.656 size: 0.125366 MiB name: RG_ring_2_59916 00:04:46.656 size: 0.015991 MiB name: RG_ring_3_59916 00:04:46.656 end memzones------- 00:04:46.656 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@23 -- # /home/vagrant/spdk_repo/spdk/scripts/dpdk_mem_info.py -m 0 00:04:46.919 heap id: 0 total size: 824.000000 MiB number of busy elements: 322 number of free elements: 18 00:04:46.919 list of free elements. 
00:04:46.919 size: 16.779663 MiB
00:04:46.919 element at address: 0x200006400000 with size: 1.995972 MiB
00:04:46.919 element at address: 0x20000a600000 with size: 1.995972 MiB
00:04:46.919 element at address: 0x200003e00000 with size: 1.991028 MiB
00:04:46.919 element at address: 0x200019500040 with size: 0.999939 MiB
00:04:46.919 element at address: 0x200019900040 with size: 0.999939 MiB
00:04:46.919 element at address: 0x200019a00000 with size: 0.999084 MiB
00:04:46.919 element at address: 0x200032600000 with size: 0.994324 MiB
00:04:46.919 element at address: 0x200000400000 with size: 0.992004 MiB
00:04:46.919 element at address: 0x200019200000 with size: 0.959656 MiB
00:04:46.919 element at address: 0x200019d00040 with size: 0.936401 MiB
00:04:46.919 element at address: 0x200000200000 with size: 0.716980 MiB
00:04:46.919 element at address: 0x20001b400000 with size: 0.560974 MiB
00:04:46.919 element at address: 0x200000c00000 with size: 0.489197 MiB
00:04:46.919 element at address: 0x200019600000 with size: 0.487976 MiB
00:04:46.919 element at address: 0x200019e00000 with size: 0.485413 MiB
00:04:46.919 element at address: 0x200012c00000 with size: 0.433228 MiB
00:04:46.919 element at address: 0x200028800000 with size: 0.390686 MiB
00:04:46.919 element at address: 0x200000800000 with size: 0.350891 MiB
00:04:46.919 list of standard malloc elements. size: 199.289429 MiB
00:04:46.919 element at address: 0x20000a7fef80 with size: 132.000183 MiB
00:04:46.919 element at address: 0x2000065fef80 with size: 64.000183 MiB
00:04:46.919 element at address: 0x2000193fff80 with size: 1.000183 MiB
00:04:46.919 element at address: 0x2000197fff80 with size: 1.000183 MiB
00:04:46.919 element at address: 0x200019bfff80 with size: 1.000183 MiB
00:04:46.919 element at address: 0x2000003d9e80 with size: 0.140808 MiB
00:04:46.919 element at address: 0x200019deff40 with size: 0.062683 MiB
00:04:46.919 element at address: 0x2000003fdf40 with size: 0.007996 MiB
00:04:46.919 element at address: 0x20000a5ff040 with size: 0.000427 MiB
00:04:46.919 element at address: 0x200019defdc0 with size: 0.000366 MiB
00:04:46.919 element at address: 0x200012bff040 with size: 0.000305 MiB
00:04:46.919 [several hundred further malloc elements of 0.000244 MiB each, spanning the 0x2000002d.../0x2000004f..., 0x2000008..., 0x200000c7..., 0x20000a5f..., 0x200012b.../0x200012c..., 0x2000192...-0x200019e..., 0x20001b48...-0x20001b49..., and 0x20002886... address ranges]
00:04:46.921 list of memzone associated elements.
size: 607.930908 MiB 00:04:46.921 element at address: 0x20001b4954c0 with size: 211.416809 MiB 00:04:46.921 associated memzone info: size: 211.416626 MiB name: MP_PDU_immediate_data_Pool_0 00:04:46.921 element at address: 0x20002886ff80 with size: 157.562622 MiB 00:04:46.921 associated memzone info: size: 157.562439 MiB name: MP_PDU_data_out_Pool_0 00:04:46.921 element at address: 0x200012df1e40 with size: 100.055115 MiB 00:04:46.921 associated memzone info: size: 100.054932 MiB name: MP_bdev_io_59916_0 00:04:46.921 element at address: 0x200000dff340 with size: 48.003113 MiB 00:04:46.921 associated memzone info: size: 48.002930 MiB name: MP_msgpool_59916_0 00:04:46.921 element at address: 0x200003ffdb40 with size: 36.008972 MiB 00:04:46.921 associated memzone info: size: 36.008789 MiB name: MP_fsdev_io_59916_0 00:04:46.921 element at address: 0x200019fbe900 with size: 20.255615 MiB 00:04:46.921 associated memzone info: size: 20.255432 MiB name: MP_PDU_Pool_0 00:04:46.921 element at address: 0x2000327feb00 with size: 18.005127 MiB 00:04:46.921 associated memzone info: size: 18.004944 MiB name: MP_SCSI_TASK_Pool_0 00:04:46.921 element at address: 0x2000004ffec0 with size: 3.000305 MiB 00:04:46.921 associated memzone info: size: 3.000122 MiB name: MP_evtpool_59916_0 00:04:46.921 element at address: 0x2000009ffdc0 with size: 2.000549 MiB 00:04:46.921 associated memzone info: size: 2.000366 MiB name: RG_MP_msgpool_59916 00:04:46.921 element at address: 0x2000002d7c00 with size: 1.008179 MiB 00:04:46.921 associated memzone info: size: 1.007996 MiB name: MP_evtpool_59916 00:04:46.921 element at address: 0x2000196fde00 with size: 1.008179 MiB 00:04:46.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_Pool 00:04:46.921 element at address: 0x200019ebc780 with size: 1.008179 MiB 00:04:46.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_immediate_data_Pool 00:04:46.921 element at address: 0x2000192fde00 with size: 1.008179 MiB 00:04:46.921 associated memzone info: size: 1.007996 MiB name: MP_PDU_data_out_Pool 00:04:46.921 element at address: 0x200012cefcc0 with size: 1.008179 MiB 00:04:46.921 associated memzone info: size: 1.007996 MiB name: MP_SCSI_TASK_Pool 00:04:46.921 element at address: 0x200000cff100 with size: 1.000549 MiB 00:04:46.921 associated memzone info: size: 1.000366 MiB name: RG_ring_0_59916 00:04:46.921 element at address: 0x2000008ffb80 with size: 1.000549 MiB 00:04:46.921 associated memzone info: size: 1.000366 MiB name: RG_ring_1_59916 00:04:46.921 element at address: 0x200019affd40 with size: 1.000549 MiB 00:04:46.921 associated memzone info: size: 1.000366 MiB name: RG_ring_4_59916 00:04:46.921 element at address: 0x2000326fe8c0 with size: 1.000549 MiB 00:04:46.921 associated memzone info: size: 1.000366 MiB name: RG_ring_5_59916 00:04:46.921 element at address: 0x20000087f5c0 with size: 0.500549 MiB 00:04:46.921 associated memzone info: size: 0.500366 MiB name: RG_MP_fsdev_io_59916 00:04:46.921 element at address: 0x200000c7ecc0 with size: 0.500549 MiB 00:04:46.921 associated memzone info: size: 0.500366 MiB name: RG_MP_bdev_io_59916 00:04:46.921 element at address: 0x20001967dac0 with size: 0.500549 MiB 00:04:46.921 associated memzone info: size: 0.500366 MiB name: RG_MP_PDU_Pool 00:04:46.921 element at address: 0x200012c6f980 with size: 0.500549 MiB 00:04:46.921 associated memzone info: size: 0.500366 MiB name: RG_MP_SCSI_TASK_Pool 00:04:46.921 element at address: 0x200019e7c440 with size: 0.250549 MiB 00:04:46.921 associated memzone info: size: 0.250366 
MiB name: RG_MP_PDU_immediate_data_Pool 00:04:46.921 element at address: 0x2000002b78c0 with size: 0.125549 MiB 00:04:46.921 associated memzone info: size: 0.125366 MiB name: RG_MP_evtpool_59916 00:04:46.921 element at address: 0x20000085df80 with size: 0.125549 MiB 00:04:46.921 associated memzone info: size: 0.125366 MiB name: RG_ring_2_59916 00:04:46.921 element at address: 0x2000192f5ac0 with size: 0.031799 MiB 00:04:46.921 associated memzone info: size: 0.031616 MiB name: RG_MP_PDU_data_out_Pool 00:04:46.921 element at address: 0x200028864240 with size: 0.023804 MiB 00:04:46.921 associated memzone info: size: 0.023621 MiB name: MP_Session_Pool_0 00:04:46.921 element at address: 0x200000859d40 with size: 0.016174 MiB 00:04:46.921 associated memzone info: size: 0.015991 MiB name: RG_ring_3_59916 00:04:46.921 element at address: 0x20002886a3c0 with size: 0.002502 MiB 00:04:46.921 associated memzone info: size: 0.002319 MiB name: RG_MP_Session_Pool 00:04:46.921 element at address: 0x2000004ffa40 with size: 0.000366 MiB 00:04:46.921 associated memzone info: size: 0.000183 MiB name: MP_msgpool_59916 00:04:46.921 element at address: 0x2000008ff900 with size: 0.000366 MiB 00:04:46.922 associated memzone info: size: 0.000183 MiB name: MP_fsdev_io_59916 00:04:46.922 element at address: 0x200012bffd80 with size: 0.000366 MiB 00:04:46.922 associated memzone info: size: 0.000183 MiB name: MP_bdev_io_59916 00:04:46.922 element at address: 0x20002886af00 with size: 0.000366 MiB 00:04:46.922 associated memzone info: size: 0.000183 MiB name: MP_Session_Pool 00:04:46.922 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@25 -- # trap - SIGINT SIGTERM EXIT 00:04:46.922 02:00:11 dpdk_mem_utility -- dpdk_memory_utility/test_dpdk_mem_info.sh@26 -- # killprocess 59916 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@954 -- # '[' -z 59916 ']' 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@958 -- # kill -0 59916 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # uname 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 59916 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:04:46.922 killing process with pid 59916 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@972 -- # echo 'killing process with pid 59916' 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@973 -- # kill 59916 00:04:46.922 02:00:11 dpdk_mem_utility -- common/autotest_common.sh@978 -- # wait 59916 00:04:48.305 00:04:48.305 real 0m2.584s 00:04:48.305 user 0m2.545s 00:04:48.305 sys 0m0.382s 00:04:48.305 02:00:12 dpdk_mem_utility -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:48.305 02:00:12 dpdk_mem_utility -- common/autotest_common.sh@10 -- # set +x 00:04:48.305 ************************************ 00:04:48.305 END TEST dpdk_mem_utility 00:04:48.305 ************************************ 00:04:48.305 02:00:12 -- spdk/autotest.sh@168 -- # run_test event /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.305 02:00:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:48.305 02:00:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.305 02:00:12 -- common/autotest_common.sh@10 -- # set +x 
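The dpdk_mem_utility pass above exercises the memory-dump flow end to end. As a minimal sketch (same paths and tools the job just ran), the flow can be reproduced by hand against a running SPDK target:

    spdk_repo=/home/vagrant/spdk_repo/spdk
    # Ask the app to write out its DPDK memory stats; the RPC replies with
    # the dump location as JSON, e.g. { "filename": "/tmp/spdk_mem_dump.txt" }
    "$spdk_repo/scripts/rpc.py" env_dpdk_get_mem_stats
    # Summarize every heap, mempool, and memzone found in the dump...
    "$spdk_repo/scripts/dpdk_mem_info.py"
    # ...then drill into heap 0 for the per-element breakdown shown above.
    "$spdk_repo/scripts/dpdk_mem_info.py" -m 0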
00:04:48.305 ************************************ 00:04:48.305 START TEST event 00:04:48.305 ************************************ 00:04:48.305 02:00:12 event -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event.sh 00:04:48.305 * Looking for test storage... 00:04:48.305 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:04:48.305 02:00:13 event -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:48.305 02:00:13 event -- common/autotest_common.sh@1711 -- # lcov --version 00:04:48.305 02:00:13 event -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:48.564 02:00:13 event -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:48.564 02:00:13 event -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:48.564 02:00:13 event -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:48.564 02:00:13 event -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:48.564 02:00:13 event -- scripts/common.sh@336 -- # IFS=.-: 00:04:48.564 02:00:13 event -- scripts/common.sh@336 -- # read -ra ver1 00:04:48.564 02:00:13 event -- scripts/common.sh@337 -- # IFS=.-: 00:04:48.564 02:00:13 event -- scripts/common.sh@337 -- # read -ra ver2 00:04:48.564 02:00:13 event -- scripts/common.sh@338 -- # local 'op=<' 00:04:48.564 02:00:13 event -- scripts/common.sh@340 -- # ver1_l=2 00:04:48.564 02:00:13 event -- scripts/common.sh@341 -- # ver2_l=1 00:04:48.564 02:00:13 event -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:48.564 02:00:13 event -- scripts/common.sh@344 -- # case "$op" in 00:04:48.564 02:00:13 event -- scripts/common.sh@345 -- # : 1 00:04:48.564 02:00:13 event -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:48.564 02:00:13 event -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:48.564 02:00:13 event -- scripts/common.sh@365 -- # decimal 1 00:04:48.564 02:00:13 event -- scripts/common.sh@353 -- # local d=1 00:04:48.564 02:00:13 event -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:48.564 02:00:13 event -- scripts/common.sh@355 -- # echo 1 00:04:48.564 02:00:13 event -- scripts/common.sh@365 -- # ver1[v]=1 00:04:48.564 02:00:13 event -- scripts/common.sh@366 -- # decimal 2 00:04:48.564 02:00:13 event -- scripts/common.sh@353 -- # local d=2 00:04:48.564 02:00:13 event -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:48.564 02:00:13 event -- scripts/common.sh@355 -- # echo 2 00:04:48.564 02:00:13 event -- scripts/common.sh@366 -- # ver2[v]=2 00:04:48.564 02:00:13 event -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:48.564 02:00:13 event -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:48.564 02:00:13 event -- scripts/common.sh@368 -- # return 0 00:04:48.564 02:00:13 event -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:48.564 02:00:13 event -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.565 --rc genhtml_branch_coverage=1 00:04:48.565 --rc genhtml_function_coverage=1 00:04:48.565 --rc genhtml_legend=1 00:04:48.565 --rc geninfo_all_blocks=1 00:04:48.565 --rc geninfo_unexecuted_blocks=1 00:04:48.565 00:04:48.565 ' 00:04:48.565 02:00:13 event -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.565 --rc genhtml_branch_coverage=1 00:04:48.565 --rc genhtml_function_coverage=1 00:04:48.565 --rc genhtml_legend=1 00:04:48.565 --rc 
geninfo_all_blocks=1 00:04:48.565 --rc geninfo_unexecuted_blocks=1 00:04:48.565 00:04:48.565 ' 00:04:48.565 02:00:13 event -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.565 --rc genhtml_branch_coverage=1 00:04:48.565 --rc genhtml_function_coverage=1 00:04:48.565 --rc genhtml_legend=1 00:04:48.565 --rc geninfo_all_blocks=1 00:04:48.565 --rc geninfo_unexecuted_blocks=1 00:04:48.565 00:04:48.565 ' 00:04:48.565 02:00:13 event -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:48.565 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:48.565 --rc genhtml_branch_coverage=1 00:04:48.565 --rc genhtml_function_coverage=1 00:04:48.565 --rc genhtml_legend=1 00:04:48.565 --rc geninfo_all_blocks=1 00:04:48.565 --rc geninfo_unexecuted_blocks=1 00:04:48.565 00:04:48.565 ' 00:04:48.565 02:00:13 event -- event/event.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:04:48.565 02:00:13 event -- bdev/nbd_common.sh@6 -- # set -e 00:04:48.565 02:00:13 event -- event/event.sh@45 -- # run_test event_perf /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.565 02:00:13 event -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:04:48.565 02:00:13 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:48.565 02:00:13 event -- common/autotest_common.sh@10 -- # set +x 00:04:48.565 ************************************ 00:04:48.565 START TEST event_perf 00:04:48.565 ************************************ 00:04:48.565 02:00:13 event.event_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1 00:04:48.565 Running I/O for 1 seconds...[2024-12-15 02:00:13.138464] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:48.565 [2024-12-15 02:00:13.138567] [ DPDK EAL parameters: event_perf --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60008 ] 00:04:48.565 [2024-12-15 02:00:13.294757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:48.823 [2024-12-15 02:00:13.372277] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:48.823 [2024-12-15 02:00:13.372465] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:48.823 Running I/O for 1 seconds...[2024-12-15 02:00:13.372794] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:48.823 [2024-12-15 02:00:13.372827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:49.759 00:04:49.759 lcore 0: 202360 00:04:49.759 lcore 1: 202362 00:04:49.759 lcore 2: 202362 00:04:49.759 lcore 3: 202363 00:04:49.759 done. 
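The per-lcore event counts above come from a four-core run. A minimal sketch of the invocation, using the same binary and flags the test driver passed:

    # -m 0xF pins reactors to lcores 0-3; -t 1 runs the benchmark for one
    # second, after which each reactor reports how many events it processed.
    /home/vagrant/spdk_repo/spdk/test/event/event_perf/event_perf -m 0xF -t 1
    # A sparser mask also works, e.g. -m 0x5 would use lcores 0 and 2 only.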
00:04:49.759 00:04:49.759 real 0m1.385s 00:04:49.759 user 0m4.204s 00:04:49.759 sys 0m0.066s 00:04:49.759 02:00:14 event.event_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:49.759 02:00:14 event.event_perf -- common/autotest_common.sh@10 -- # set +x 00:04:49.759 ************************************ 00:04:49.759 END TEST event_perf 00:04:49.759 ************************************ 00:04:50.017 02:00:14 event -- event/event.sh@46 -- # run_test event_reactor /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:50.017 02:00:14 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:50.017 02:00:14 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:50.017 02:00:14 event -- common/autotest_common.sh@10 -- # set +x 00:04:50.017 ************************************ 00:04:50.017 START TEST event_reactor 00:04:50.017 ************************************ 00:04:50.017 02:00:14 event.event_reactor -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor/reactor -t 1 00:04:50.017 [2024-12-15 02:00:14.582717] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:04:50.017 [2024-12-15 02:00:14.582823] [ DPDK EAL parameters: reactor --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60047 ] 00:04:50.017 [2024-12-15 02:00:14.738003] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:50.276 [2024-12-15 02:00:14.811116] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:51.218 test_start 00:04:51.218 oneshot 00:04:51.218 tick 100 00:04:51.218 tick 100 00:04:51.218 tick 250 00:04:51.218 tick 100 00:04:51.218 tick 100 00:04:51.218 tick 250 00:04:51.218 tick 100 00:04:51.218 tick 500 00:04:51.218 tick 100 00:04:51.218 tick 100 00:04:51.218 tick 250 00:04:51.218 tick 100 00:04:51.218 tick 100 00:04:51.218 test_end 00:04:51.218 00:04:51.218 real 0m1.375s 00:04:51.218 user 0m1.204s 00:04:51.218 sys 0m0.063s 00:04:51.218 02:00:15 event.event_reactor -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:51.218 ************************************ 00:04:51.218 END TEST event_reactor 00:04:51.218 ************************************ 00:04:51.218 02:00:15 event.event_reactor -- common/autotest_common.sh@10 -- # set +x 00:04:51.218 02:00:15 event -- event/event.sh@47 -- # run_test event_reactor_perf /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.218 02:00:15 event -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:04:51.218 02:00:15 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:51.218 02:00:15 event -- common/autotest_common.sh@10 -- # set +x 00:04:51.218 ************************************ 00:04:51.218 START TEST event_reactor_perf 00:04:51.218 ************************************ 00:04:51.218 02:00:15 event.event_reactor_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/reactor_perf/reactor_perf -t 1 00:04:51.477 [2024-12-15 02:00:16.001633] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:51.477 [2024-12-15 02:00:16.001714] [ DPDK EAL parameters: reactor_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60084 ] 00:04:51.477 [2024-12-15 02:00:16.150271] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:04:51.477 [2024-12-15 02:00:16.222373] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:52.865 test_start 00:04:52.865 test_end 00:04:52.865 Performance: 412138 events per second 00:04:52.865 00:04:52.865 real 0m1.366s 00:04:52.865 user 0m1.205s 00:04:52.865 sys 0m0.054s 00:04:52.865 02:00:17 event.event_reactor_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:52.865 ************************************ 00:04:52.865 END TEST event_reactor_perf 00:04:52.865 ************************************ 00:04:52.865 02:00:17 event.event_reactor_perf -- common/autotest_common.sh@10 -- # set +x 00:04:52.865 02:00:17 event -- event/event.sh@49 -- # uname -s 00:04:52.865 02:00:17 event -- event/event.sh@49 -- # '[' Linux = Linux ']' 00:04:52.865 02:00:17 event -- event/event.sh@50 -- # run_test event_scheduler /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.865 02:00:17 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:52.865 02:00:17 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:52.865 02:00:17 event -- common/autotest_common.sh@10 -- # set +x 00:04:52.865 ************************************ 00:04:52.865 START TEST event_scheduler 00:04:52.865 ************************************ 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler.sh 00:04:52.865 * Looking for test storage... 
00:04:52.865 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event/scheduler 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1711 -- # lcov --version 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@333 -- # local ver1 ver1_l 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@334 -- # local ver2 ver2_l 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@336 -- # IFS=.-: 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@336 -- # read -ra ver1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@337 -- # IFS=.-: 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@337 -- # read -ra ver2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@338 -- # local 'op=<' 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@340 -- # ver1_l=2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@341 -- # ver2_l=1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@344 -- # case "$op" in 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@345 -- # : 1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v = 0 )) 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@365 -- # decimal 1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@365 -- # ver1[v]=1 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@366 -- # decimal 2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@353 -- # local d=2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@355 -- # echo 2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@366 -- # ver2[v]=2 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:04:52.865 02:00:17 event.event_scheduler -- scripts/common.sh@368 -- # return 0 00:04:52.865 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
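A minimal sketch of the wait-for-listen step logged above; waitforlisten_sketch is a hypothetical simplification of the helper in autotest_common.sh, not its exact code. It polls the app's UNIX domain socket until RPCs are answered, bailing out if the process dies first:

    waitforlisten_sketch() {
        local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock}
        local retries=100
        while (( retries-- > 0 )); do
            kill -0 "$pid" 2>/dev/null || return 1   # process died: give up
            # rpc_get_methods fails until the app starts servicing the socket
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_addr" -t 1 \
                rpc_get_methods &>/dev/null && return 0
            sleep 0.1
        done
        return 1
    }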
00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:04:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.865 --rc genhtml_branch_coverage=1 00:04:52.865 --rc genhtml_function_coverage=1 00:04:52.865 --rc genhtml_legend=1 00:04:52.865 --rc geninfo_all_blocks=1 00:04:52.865 --rc geninfo_unexecuted_blocks=1 00:04:52.865 00:04:52.865 ' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:04:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.865 --rc genhtml_branch_coverage=1 00:04:52.865 --rc genhtml_function_coverage=1 00:04:52.865 --rc genhtml_legend=1 00:04:52.865 --rc geninfo_all_blocks=1 00:04:52.865 --rc geninfo_unexecuted_blocks=1 00:04:52.865 00:04:52.865 ' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:04:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.865 --rc genhtml_branch_coverage=1 00:04:52.865 --rc genhtml_function_coverage=1 00:04:52.865 --rc genhtml_legend=1 00:04:52.865 --rc geninfo_all_blocks=1 00:04:52.865 --rc geninfo_unexecuted_blocks=1 00:04:52.865 00:04:52.865 ' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:04:52.865 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:04:52.865 --rc genhtml_branch_coverage=1 00:04:52.865 --rc genhtml_function_coverage=1 00:04:52.865 --rc genhtml_legend=1 00:04:52.865 --rc geninfo_all_blocks=1 00:04:52.865 --rc geninfo_unexecuted_blocks=1 00:04:52.865 00:04:52.865 ' 00:04:52.865 02:00:17 event.event_scheduler -- scheduler/scheduler.sh@29 -- # rpc=rpc_cmd 00:04:52.865 02:00:17 event.event_scheduler -- scheduler/scheduler.sh@35 -- # scheduler_pid=60149 00:04:52.865 02:00:17 event.event_scheduler -- scheduler/scheduler.sh@36 -- # trap 'killprocess $scheduler_pid; exit 1' SIGINT SIGTERM EXIT 00:04:52.865 02:00:17 event.event_scheduler -- scheduler/scheduler.sh@37 -- # waitforlisten 60149 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@835 -- # '[' -z 60149 ']' 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:04:52.865 02:00:17 event.event_scheduler -- scheduler/scheduler.sh@34 -- # /home/vagrant/spdk_repo/spdk/test/event/scheduler/scheduler -m 0xF -p 0x2 --wait-for-rpc -f 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:52.865 02:00:17 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:52.865 [2024-12-15 02:00:17.613457] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:52.865 [2024-12-15 02:00:17.613689] [ DPDK EAL parameters: scheduler --no-shconf -c 0xF --main-lcore=2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60149 ] 00:04:53.126 [2024-12-15 02:00:17.771000] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:04:53.126 [2024-12-15 02:00:17.872666] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:53.126 [2024-12-15 02:00:17.872940] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:53.126 [2024-12-15 02:00:17.873259] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:04:53.126 [2024-12-15 02:00:17.873486] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:04:54.068 02:00:18 event.event_scheduler -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:54.068 02:00:18 event.event_scheduler -- common/autotest_common.sh@868 -- # return 0 00:04:54.068 02:00:18 event.event_scheduler -- scheduler/scheduler.sh@39 -- # rpc_cmd framework_set_scheduler dynamic 00:04:54.068 02:00:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.068 02:00:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.068 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.069 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.069 POWER: Cannot set governor of lcore 0 to performance 00:04:54.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.069 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.069 POWER: failed to open /sys/devices/system/cpu/cpu%u/cpufreq/scaling_governor 00:04:54.069 POWER: Cannot set governor of lcore 0 to userspace 00:04:54.069 GUEST_CHANNEL: Opening channel '/dev/virtio-ports/virtio.serial.port.poweragent.0' for lcore 0 00:04:54.069 GUEST_CHANNEL: Unable to connect to '/dev/virtio-ports/virtio.serial.port.poweragent.0' with error No such file or directory 00:04:54.069 POWER: Unable to set Power Management Environment for lcore 0 00:04:54.069 [2024-12-15 02:00:18.491282] dpdk_governor.c: 135:_init_core: *ERROR*: Failed to initialize on core0 00:04:54.069 [2024-12-15 02:00:18.491303] dpdk_governor.c: 196:_init: *ERROR*: Failed to initialize on core0 00:04:54.069 [2024-12-15 02:00:18.491314] scheduler_dynamic.c: 280:init: *NOTICE*: Unable to initialize dpdk governor 00:04:54.069 [2024-12-15 02:00:18.491331] scheduler_dynamic.c: 427:set_opts: *NOTICE*: Setting scheduler load limit to 20 00:04:54.069 [2024-12-15 02:00:18.491339] scheduler_dynamic.c: 429:set_opts: *NOTICE*: Setting scheduler core limit to 80 00:04:54.069 [2024-12-15 02:00:18.491347] scheduler_dynamic.c: 431:set_opts: *NOTICE*: Setting scheduler core busy to 95 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler -- scheduler/scheduler.sh@40 -- # rpc_cmd framework_start_init 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 [2024-12-15 02:00:18.717655] scheduler.c: 382:test_start: *NOTICE*: Scheduler test application started. 
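The POWER errors above are expected in this VM: no cpufreq sysfs governor is writable, so the DPDK governor fails to initialize and the dynamic scheduler falls back to its default load/core/busy limits (20/80/95). A minimal sketch of the RPC sequence the test just ran, assuming an app started with --wait-for-rpc:

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc framework_set_scheduler dynamic   # must be selected before init
    $rpc framework_start_init              # finish SPDK subsystem init
    $rpc framework_get_scheduler           # verify: should report dynamic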
00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler -- scheduler/scheduler.sh@43 -- # run_test scheduler_create_thread scheduler_create_thread 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 ************************************ 00:04:54.069 START TEST scheduler_create_thread 00:04:54.069 ************************************ 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1129 -- # scheduler_create_thread 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@12 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x1 -a 100 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 2 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@13 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x2 -a 100 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 3 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@14 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x4 -a 100 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 4 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@15 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n active_pinned -m 0x8 -a 100 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 5 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@16 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x1 -a 0 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 6 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- 
common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@17 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x2 -a 0 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 7 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@18 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x4 -a 0 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 8 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@19 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n idle_pinned -m 0x8 -a 0 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 9 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@21 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n one_third_active -a 30 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 10 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_create -n half_active -a 0 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@22 -- # thread_id=11 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@23 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_set_active 11 50 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.069 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # rpc_cmd --plugin 
scheduler_plugin scheduler_thread_create -n deleted -a 100 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@25 -- # thread_id=12 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- scheduler/scheduler.sh@26 -- # rpc_cmd --plugin scheduler_plugin scheduler_thread_delete 12 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@563 -- # xtrace_disable 00:04:54.331 02:00:18 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.270 ************************************ 00:04:55.270 END TEST scheduler_create_thread 00:04:55.270 ************************************ 00:04:55.270 02:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:04:55.270 00:04:55.270 real 0m1.173s 00:04:55.270 user 0m0.015s 00:04:55.270 sys 0m0.004s 00:04:55.270 02:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:55.270 02:00:19 event.event_scheduler.scheduler_create_thread -- common/autotest_common.sh@10 -- # set +x 00:04:55.270 02:00:19 event.event_scheduler -- scheduler/scheduler.sh@45 -- # trap - SIGINT SIGTERM EXIT 00:04:55.270 02:00:19 event.event_scheduler -- scheduler/scheduler.sh@46 -- # killprocess 60149 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@954 -- # '[' -z 60149 ']' 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@958 -- # kill -0 60149 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@959 -- # uname 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60149 00:04:55.270 killing process with pid 60149 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60149' 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@973 -- # kill 60149 00:04:55.270 02:00:19 event.event_scheduler -- common/autotest_common.sh@978 -- # wait 60149 00:04:55.837 [2024-12-15 02:00:20.389907] scheduler.c: 360:test_shutdown: *NOTICE*: Scheduler test application stopped. 
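Teardown in these tests follows the trap/kill/wait pattern visible in the trace. A minimal hypothetical sketch (teardown_sketch is a simplified stand-in for killprocess from autotest_common.sh):

    teardown_sketch() {
        local pid=$1
        trap - SIGINT SIGTERM EXIT       # drop the cleanup trap set at start
        kill "$pid" 2>/dev/null || return 0    # already gone: nothing to do
        wait "$pid" 2>/dev/null || true        # reap so the exit status is
    }                                          # collected, tolerating SIGTERM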
00:04:56.406 00:04:56.406 real 0m3.558s 00:04:56.406 user 0m5.902s 00:04:56.406 sys 0m0.333s 00:04:56.406 02:00:20 event.event_scheduler -- common/autotest_common.sh@1130 -- # xtrace_disable 00:04:56.406 ************************************ 00:04:56.406 END TEST event_scheduler 00:04:56.406 ************************************ 00:04:56.406 02:00:20 event.event_scheduler -- common/autotest_common.sh@10 -- # set +x 00:04:56.406 02:00:21 event -- event/event.sh@51 -- # modprobe -n nbd 00:04:56.406 02:00:21 event -- event/event.sh@52 -- # run_test app_repeat app_repeat_test 00:04:56.406 02:00:21 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:04:56.406 02:00:21 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:04:56.406 02:00:21 event -- common/autotest_common.sh@10 -- # set +x 00:04:56.406 ************************************ 00:04:56.406 START TEST app_repeat 00:04:56.406 ************************************ 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@1129 -- # app_repeat_test 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@12 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@13 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@13 -- # local nbd_list 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@14 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@14 -- # local bdev_list 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@15 -- # local repeat_times=4 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@17 -- # modprobe nbd 00:04:56.406 Process app_repeat pid: 60244 00:04:56.406 spdk_app_start Round 0 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@19 -- # repeat_pid=60244 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@20 -- # trap 'killprocess $repeat_pid; exit 1' SIGINT SIGTERM EXIT 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@18 -- # /home/vagrant/spdk_repo/spdk/test/event/app_repeat/app_repeat -r /var/tmp/spdk-nbd.sock -m 0x3 -t 4 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@21 -- # echo 'Process app_repeat pid: 60244' 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 0' 00:04:56.406 02:00:21 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60244 /var/tmp/spdk-nbd.sock 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60244 ']' 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:04:56.406 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:04:56.406 02:00:21 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:04:56.406 [2024-12-15 02:00:21.062103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:04:56.406 [2024-12-15 02:00:21.062228] [ DPDK EAL parameters: app_repeat --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60244 ] 00:04:56.667 [2024-12-15 02:00:21.218613] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:04:56.667 [2024-12-15 02:00:21.315901] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:04:56.667 [2024-12-15 02:00:21.315980] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:04:57.239 02:00:21 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:04:57.239 02:00:21 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:04:57.239 02:00:21 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.500 Malloc0 00:04:57.500 02:00:22 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:04:57.760 Malloc1 00:04:57.760 02:00:22 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:04:57.760 02:00:22 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:04:57.761 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:04:57.761 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:57.761 02:00:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:04:58.022 /dev/nbd0 00:04:58.022 02:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:04:58.022 02:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:04:58.022 02:00:22 event.app_repeat -- 
common/autotest_common.sh@877 -- # break 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.022 1+0 records in 00:04:58.022 1+0 records out 00:04:58.022 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000282357 s, 14.5 MB/s 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.022 02:00:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.022 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.022 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.022 02:00:22 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:04:58.283 /dev/nbd1 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:04:58.283 1+0 records in 00:04:58.283 1+0 records out 00:04:58.283 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000311787 s, 13.1 MB/s 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:04:58.283 02:00:22 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 
00:04:58.283 02:00:22 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:04:58.545 { 00:04:58.545 "nbd_device": "/dev/nbd0", 00:04:58.545 "bdev_name": "Malloc0" 00:04:58.545 }, 00:04:58.545 { 00:04:58.545 "nbd_device": "/dev/nbd1", 00:04:58.545 "bdev_name": "Malloc1" 00:04:58.545 } 00:04:58.545 ]' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:04:58.545 { 00:04:58.545 "nbd_device": "/dev/nbd0", 00:04:58.545 "bdev_name": "Malloc0" 00:04:58.545 }, 00:04:58.545 { 00:04:58.545 "nbd_device": "/dev/nbd1", 00:04:58.545 "bdev_name": "Malloc1" 00:04:58.545 } 00:04:58.545 ]' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:04:58.545 /dev/nbd1' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:04:58.545 /dev/nbd1' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:04:58.545 256+0 records in 00:04:58.545 256+0 records out 00:04:58.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678708 s, 154 MB/s 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:04:58.545 256+0 records in 00:04:58.545 256+0 records out 00:04:58.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0204351 s, 51.3 MB/s 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:04:58.545 256+0 records in 00:04:58.545 256+0 records out 00:04:58.545 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0222872 s, 47.0 MB/s 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:04:58.545 02:00:23 
event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.545 02:00:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:04:58.803 02:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:04:58.803 02:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:04:58.803 02:00:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:04:58.803 02:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:58.803 02:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:58.804 02:00:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:04:58.804 02:00:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:58.804 02:00:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:58.804 02:00:23 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:04:58.804 02:00:23 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:04:59.062 02:00:23 event.app_repeat -- 
bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:04:59.062 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:04:59.320 02:00:23 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:04:59.320 02:00:23 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:04:59.578 02:00:24 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:00.145 [2024-12-15 02:00:24.845446] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:00.403 [2024-12-15 02:00:24.913932] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:00.403 [2024-12-15 02:00:24.914034] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:00.403 [2024-12-15 02:00:25.013016] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:00.403 [2024-12-15 02:00:25.013225] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:02.931 02:00:27 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:02.931 spdk_app_start Round 1 00:05:02.931 02:00:27 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 1' 00:05:02.931 02:00:27 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60244 /var/tmp/spdk-nbd.sock 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60244 ']' 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:02.931 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
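The nbd_dd_data_verify passes traced in the round above reduce to a plain dd/cmp round trip. A condensed sketch of that write-then-verify cycle (paths, sizes and device names are taken from the trace; 256 blocks of 4096 bytes is the 1 MiB that cmp -n 1M checks):

    tmp=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest
    # Write 1 MiB of random data, then push it onto each exported NBD
    # device with O_DIRECT so it really lands on the Malloc bdev.
    dd if=/dev/urandom of="$tmp" bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1; do
        dd if="$tmp" of="$nbd" bs=4096 count=256 oflag=direct
    done
    # Read each device back and byte-compare against the source file.
    for nbd in /dev/nbd0 /dev/nbd1; do
        cmp -b -n 1M "$tmp" "$nbd"
    done
    rm "$tmp"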
00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:02.931 02:00:27 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:02.931 02:00:27 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:02.931 Malloc0 00:05:02.931 02:00:27 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:03.189 Malloc1 00:05:03.189 02:00:27 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.189 02:00:27 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:03.447 /dev/nbd0 00:05:03.447 02:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:03.447 02:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.447 1+0 records in 00:05:03.447 1+0 records out 
00:05:03.447 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000179693 s, 22.8 MB/s 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.447 02:00:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.447 02:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.447 02:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.447 02:00:28 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:03.704 /dev/nbd1 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:03.704 1+0 records in 00:05:03.704 1+0 records out 00:05:03.704 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000243447 s, 16.8 MB/s 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:03.704 02:00:28 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.704 02:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:03.962 02:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:03.962 { 00:05:03.962 "nbd_device": "/dev/nbd0", 00:05:03.962 "bdev_name": "Malloc0" 00:05:03.962 }, 00:05:03.962 { 00:05:03.962 "nbd_device": "/dev/nbd1", 00:05:03.962 "bdev_name": "Malloc1" 00:05:03.962 } 
00:05:03.962 ]' 00:05:03.962 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:03.963 { 00:05:03.963 "nbd_device": "/dev/nbd0", 00:05:03.963 "bdev_name": "Malloc0" 00:05:03.963 }, 00:05:03.963 { 00:05:03.963 "nbd_device": "/dev/nbd1", 00:05:03.963 "bdev_name": "Malloc1" 00:05:03.963 } 00:05:03.963 ]' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:03.963 /dev/nbd1' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:03.963 /dev/nbd1' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:03.963 256+0 records in 00:05:03.963 256+0 records out 00:05:03.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00651045 s, 161 MB/s 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:03.963 256+0 records in 00:05:03.963 256+0 records out 00:05:03.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0195305 s, 53.7 MB/s 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:03.963 256+0 records in 00:05:03.963 256+0 records out 00:05:03.963 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0163472 s, 64.1 MB/s 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:03.963 02:00:28 event.app_repeat -- 
bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:03.963 02:00:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:04.221 02:00:28 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:04.480 02:00:29 event.app_repeat -- 
bdev/nbd_common.sh@64 -- # echo '[]' 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:04.480 02:00:29 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:04.480 02:00:29 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:04.738 02:00:29 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:05.338 [2024-12-15 02:00:30.058568] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:05.596 [2024-12-15 02:00:30.136920] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:05.596 [2024-12-15 02:00:30.137078] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:05.596 [2024-12-15 02:00:30.232595] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:05.596 [2024-12-15 02:00:30.232650] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:08.124 02:00:32 event.app_repeat -- event/event.sh@23 -- # for i in {0..2} 00:05:08.124 spdk_app_start Round 2 00:05:08.124 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:08.124 02:00:32 event.app_repeat -- event/event.sh@24 -- # echo 'spdk_app_start Round 2' 00:05:08.124 02:00:32 event.app_repeat -- event/event.sh@25 -- # waitforlisten 60244 /var/tmp/spdk-nbd.sock 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60244 ']' 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 
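The nbd_get_count checks traced in the rounds above derive the device count by parsing the nbd_get_disks JSON with jq. The same check as a short sketch, using the rpc.py path and socket from the trace:

    # nbd_get_disks returns entries like
    # {"nbd_device": "/dev/nbd0", "bdev_name": "Malloc0"}; grep -c exits
    # non-zero on an empty list, hence the trailing true, exactly as the
    # '# true' xtrace line above shows.
    count=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
                | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true)
    # After nbd_stop_disk has run for every device the count must be 0,
    # which is what the '[' 0 -ne 0 ']' test asserts.
    if [ "$count" -ne 0 ]; then echo "leaked NBD devices: $count"; exit 1; fi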
00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:08.124 02:00:32 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:08.124 02:00:32 event.app_repeat -- event/event.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.383 Malloc0 00:05:08.383 02:00:32 event.app_repeat -- event/event.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create 64 4096 00:05:08.641 Malloc1 00:05:08.641 02:00:33 event.app_repeat -- event/event.sh@30 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Malloc0 Malloc1' '/dev/nbd0 /dev/nbd1' 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # bdev_list=('Malloc0' 'Malloc1') 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@12 -- # local i 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc0 /dev/nbd0 00:05:08.641 /dev/nbd0 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.641 1+0 records in 00:05:08.641 1+0 records out 
00:05:08.641 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00014755 s, 27.8 MB/s 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.641 02:00:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.641 02:00:33 event.app_repeat -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Malloc1 /dev/nbd1 00:05:08.900 /dev/nbd1 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@873 -- # local i 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@877 -- # break 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/event/nbdtest bs=4096 count=1 iflag=direct 00:05:08.900 1+0 records in 00:05:08.900 1+0 records out 00:05:08.900 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00020353 s, 20.1 MB/s 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@890 -- # size=4096 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/event/nbdtest 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:08.900 02:00:33 event.app_repeat -- common/autotest_common.sh@893 -- # return 0 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@14 -- # (( i < 2 )) 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:08.900 02:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:05:09.160 { 00:05:09.160 "nbd_device": "/dev/nbd0", 00:05:09.160 "bdev_name": "Malloc0" 00:05:09.160 }, 00:05:09.160 { 00:05:09.160 "nbd_device": "/dev/nbd1", 00:05:09.160 "bdev_name": "Malloc1" 00:05:09.160 } 
00:05:09.160 ]' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[ 00:05:09.160 { 00:05:09.160 "nbd_device": "/dev/nbd0", 00:05:09.160 "bdev_name": "Malloc0" 00:05:09.160 }, 00:05:09.160 { 00:05:09.160 "nbd_device": "/dev/nbd1", 00:05:09.160 "bdev_name": "Malloc1" 00:05:09.160 } 00:05:09.160 ]' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:05:09.160 /dev/nbd1' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:05:09.160 /dev/nbd1' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=2 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 2 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@95 -- # count=2 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@96 -- # '[' 2 -ne 2 ']' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' write 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=write 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest bs=4096 count=256 00:05:09.160 256+0 records in 00:05:09.160 256+0 records out 00:05:09.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00583653 s, 180 MB/s 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:05:09.160 256+0 records in 00:05:09.160 256+0 records out 00:05:09.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.015035 s, 69.7 MB/s 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:05:09.160 256+0 records in 00:05:09.160 256+0 records out 00:05:09.160 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0147141 s, 71.3 MB/s 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1' verify 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@70 -- # local nbd_list 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@71 -- # local operation=verify 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@82 
-- # for i in "${nbd_list[@]}" 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd0 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest /dev/nbd1 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/event/nbdrandtest 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1' 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1') 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@51 -- # local i 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.160 02:00:33 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:09.419 02:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:09.420 02:00:34 event.app_repeat -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@41 -- # break 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@45 -- # return 0 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:09.678 02:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # jq -r '.[] | 
.nbd_device' 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # echo '' 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # true 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@65 -- # count=0 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@66 -- # echo 0 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@104 -- # count=0 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:05:09.936 02:00:34 event.app_repeat -- bdev/nbd_common.sh@109 -- # return 0 00:05:09.936 02:00:34 event.app_repeat -- event/event.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock spdk_kill_instance SIGTERM 00:05:10.194 02:00:34 event.app_repeat -- event/event.sh@35 -- # sleep 3 00:05:10.761 [2024-12-15 02:00:35.414733] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:05:10.761 [2024-12-15 02:00:35.483414] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:10.761 [2024-12-15 02:00:35.483527] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:11.020 [2024-12-15 02:00:35.585028] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_register' already registered. 00:05:11.020 [2024-12-15 02:00:35.585072] notify.c: 45:spdk_notify_type_register: *NOTICE*: Notification type 'bdev_unregister' already registered. 00:05:13.549 02:00:37 event.app_repeat -- event/event.sh@38 -- # waitforlisten 60244 /var/tmp/spdk-nbd.sock 00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@835 -- # '[' -z 60244 ']' 00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:13.549 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 
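waitforlisten (autotest_common.sh@835-@868 in the trace) blocks until the app answers on its RPC socket. A simplified polling sketch of the same idea, not the verbatim helper; rpc_get_methods and the -t timeout flag are standard rpc.py features but are assumptions here, since the trace never reaches them:

    pid=60244 rpc_addr=/var/tmp/spdk-nbd.sock max_retries=100
    while (( max_retries-- > 0 )); do
        # kill -0 only tests existence: bail out early if the app died.
        kill -0 "$pid" 2>/dev/null || exit 1
        if /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 1 -s "$rpc_addr" \
                rpc_get_methods &>/dev/null; then
            break    # socket is up and accepting RPCs
        fi
        sleep 0.1
    done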
00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:13.549 02:00:37 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@868 -- # return 0 00:05:13.549 02:00:38 event.app_repeat -- event/event.sh@39 -- # killprocess 60244 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@954 -- # '[' -z 60244 ']' 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@958 -- # kill -0 60244 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@959 -- # uname 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60244 00:05:13.549 killing process with pid 60244 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60244' 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@973 -- # kill 60244 00:05:13.549 02:00:38 event.app_repeat -- common/autotest_common.sh@978 -- # wait 60244 00:05:14.117 spdk_app_start is called in Round 0. 00:05:14.117 Shutdown signal received, stop current app iteration 00:05:14.117 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:14.117 spdk_app_start is called in Round 1. 00:05:14.117 Shutdown signal received, stop current app iteration 00:05:14.117 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:14.117 spdk_app_start is called in Round 2. 00:05:14.117 Shutdown signal received, stop current app iteration 00:05:14.117 Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 reinitialization... 00:05:14.117 spdk_app_start is called in Round 3. 00:05:14.117 Shutdown signal received, stop current app iteration 00:05:14.117 ************************************ 00:05:14.117 END TEST app_repeat 00:05:14.117 ************************************ 00:05:14.117 02:00:38 event.app_repeat -- event/event.sh@40 -- # trap - SIGINT SIGTERM EXIT 00:05:14.117 02:00:38 event.app_repeat -- event/event.sh@42 -- # return 0 00:05:14.117 00:05:14.117 real 0m17.598s 00:05:14.117 user 0m38.572s 00:05:14.117 sys 0m2.033s 00:05:14.117 02:00:38 event.app_repeat -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:14.117 02:00:38 event.app_repeat -- common/autotest_common.sh@10 -- # set +x 00:05:14.117 02:00:38 event -- event/event.sh@54 -- # (( SPDK_TEST_CRYPTO == 0 )) 00:05:14.117 02:00:38 event -- event/event.sh@55 -- # run_test cpu_locks /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.117 02:00:38 event -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.117 02:00:38 event -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.117 02:00:38 event -- common/autotest_common.sh@10 -- # set +x 00:05:14.117 ************************************ 00:05:14.117 START TEST cpu_locks 00:05:14.117 ************************************ 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/event/cpu_locks.sh 00:05:14.117 * Looking for test storage... 
00:05:14.117 * Found test storage at /home/vagrant/spdk_repo/spdk/test/event 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lcov --version 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@336 -- # IFS=.-: 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@336 -- # read -ra ver1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@337 -- # IFS=.-: 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@337 -- # read -ra ver2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@338 -- # local 'op=<' 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@340 -- # ver1_l=2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@341 -- # ver2_l=1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@344 -- # case "$op" in 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@345 -- # : 1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@365 -- # decimal 1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@365 -- # ver1[v]=1 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@366 -- # decimal 2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@353 -- # local d=2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@355 -- # echo 2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@366 -- # ver2[v]=2 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:14.117 02:00:38 event.cpu_locks -- scripts/common.sh@368 -- # return 0 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:14.117 02:00:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:14.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.118 --rc genhtml_branch_coverage=1 00:05:14.118 --rc genhtml_function_coverage=1 00:05:14.118 --rc genhtml_legend=1 00:05:14.118 --rc geninfo_all_blocks=1 00:05:14.118 --rc geninfo_unexecuted_blocks=1 00:05:14.118 00:05:14.118 ' 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:14.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.118 --rc genhtml_branch_coverage=1 00:05:14.118 --rc genhtml_function_coverage=1 
00:05:14.118 --rc genhtml_legend=1 00:05:14.118 --rc geninfo_all_blocks=1 00:05:14.118 --rc geninfo_unexecuted_blocks=1 00:05:14.118 00:05:14.118 ' 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:14.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.118 --rc genhtml_branch_coverage=1 00:05:14.118 --rc genhtml_function_coverage=1 00:05:14.118 --rc genhtml_legend=1 00:05:14.118 --rc geninfo_all_blocks=1 00:05:14.118 --rc geninfo_unexecuted_blocks=1 00:05:14.118 00:05:14.118 ' 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:14.118 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:14.118 --rc genhtml_branch_coverage=1 00:05:14.118 --rc genhtml_function_coverage=1 00:05:14.118 --rc genhtml_legend=1 00:05:14.118 --rc geninfo_all_blocks=1 00:05:14.118 --rc geninfo_unexecuted_blocks=1 00:05:14.118 00:05:14.118 ' 00:05:14.118 02:00:38 event.cpu_locks -- event/cpu_locks.sh@11 -- # rpc_sock1=/var/tmp/spdk.sock 00:05:14.118 02:00:38 event.cpu_locks -- event/cpu_locks.sh@12 -- # rpc_sock2=/var/tmp/spdk2.sock 00:05:14.118 02:00:38 event.cpu_locks -- event/cpu_locks.sh@164 -- # trap cleanup EXIT SIGTERM SIGINT 00:05:14.118 02:00:38 event.cpu_locks -- event/cpu_locks.sh@166 -- # run_test default_locks default_locks 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:14.118 02:00:38 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.118 ************************************ 00:05:14.118 START TEST default_locks 00:05:14.118 ************************************ 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@1129 -- # default_locks 00:05:14.118 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@46 -- # spdk_tgt_pid=60669 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@47 -- # waitforlisten 60669 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60669 ']' 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- event/cpu_locks.sh@45 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:14.118 02:00:38 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:14.379 [2024-12-15 02:00:38.896266] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
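The coverage preamble above gates its lcov flags on "lt 1.15 2", a field-by-field dotted-version comparison from scripts/common.sh: both versions are split on ".", "-" and ":", each field pair is compared numerically, and missing fields count as zero. A condensed sketch of that comparison (a reconstruction of the traced logic, not the verbatim script, which also routes each field through its decimal helper):

# Sketch: succeed when version $1 sorts strictly before version $2.
lt() {
    local -a ver1 ver2
    IFS=.-: read -ra ver1 <<< "$1"
    IFS=.-: read -ra ver2 <<< "$2"
    local v len=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
    for (( v = 0; v < len; v++ )); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
    done
    return 1   # equal versions are not "less than"
}
lt 1.15 2 && echo "1.15 sorts before 2"   # the traced call returns success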
00:05:14.379 [2024-12-15 02:00:38.896380] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60669 ] 00:05:14.379 [2024-12-15 02:00:39.058416] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:14.641 [2024-12-15 02:00:39.153109] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 0 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@49 -- # locks_exist 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # lslocks -p 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- event/cpu_locks.sh@50 -- # killprocess 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@954 -- # '[' -z 60669 ']' 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@958 -- # kill -0 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # uname 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60669 00:05:15.215 killing process with pid 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60669' 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@973 -- # kill 60669 00:05:15.215 02:00:39 event.cpu_locks.default_locks -- common/autotest_common.sh@978 -- # wait 60669 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@52 -- # NOT waitforlisten 60669 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@652 -- # local es=0 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 60669 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # waitforlisten 60669 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@835 -- # '[' -z 60669 ']' 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.602 Waiting for process to 
start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.602 ERROR: process (pid: 60669) is no longer running 00:05:16.602 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (60669) - No such process 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@868 -- # return 1 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@655 -- # es=1 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@54 -- # no_locks 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@26 -- # local lock_files 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:16.602 00:05:16.602 real 0m2.516s 00:05:16.602 user 0m2.499s 00:05:16.602 sys 0m0.457s 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:16.602 ************************************ 00:05:16.602 END TEST default_locks 00:05:16.602 ************************************ 00:05:16.602 02:00:41 event.cpu_locks.default_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.862 02:00:41 event.cpu_locks -- event/cpu_locks.sh@167 -- # run_test default_locks_via_rpc default_locks_via_rpc 00:05:16.862 02:00:41 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:16.862 02:00:41 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:16.862 02:00:41 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:16.862 ************************************ 00:05:16.862 START TEST default_locks_via_rpc 00:05:16.862 ************************************ 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1129 -- # default_locks_via_rpc 00:05:16.862 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
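default_locks above leans on two one-line assertions from cpu_locks.sh that the rest of this suite keeps reusing: locks_exist asks lslocks whether a pid still holds a spdk_cpu_lock file, and no_locks asserts that no lock files survive a shutdown. A condensed sketch of both (nullglob is an assumption; the traced "lock_files=()" implies an empty glob expands to nothing):

# Sketch of the two lock assertions exercised throughout this suite.
locks_exist() {
    # a live spdk_tgt holds a POSIX lock on /var/tmp/spdk_cpu_lock_NNN per claimed core
    lslocks -p "$1" | grep -q spdk_cpu_lock
}
no_locks() {
    shopt -s nullglob
    local lock_files=(/var/tmp/spdk_cpu_lock_*)
    (( ${#lock_files[@]} == 0 ))   # the traced "(( 0 != 0 ))" is this count check
}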
00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@62 -- # spdk_tgt_pid=60733 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@63 -- # waitforlisten 60733 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 60733 ']' 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:16.862 02:00:41 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@61 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:16.862 [2024-12-15 02:00:41.468568] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:16.862 [2024-12-15 02:00:41.468675] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60733 ] 00:05:16.862 [2024-12-15 02:00:41.622541] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:17.120 [2024-12-15 02:00:41.697494] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@65 -- # rpc_cmd framework_disable_cpumask_locks 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@67 -- # no_locks 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # lock_files=() 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@26 -- # local lock_files 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@27 -- # (( 0 != 0 )) 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@69 -- # rpc_cmd framework_enable_cpumask_locks 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@71 -- # locks_exist 60733 00:05:17.687 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # lslocks -p 60733 00:05:17.687 
02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- event/cpu_locks.sh@73 -- # killprocess 60733 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@954 -- # '[' -z 60733 ']' 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@958 -- # kill -0 60733 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # uname 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60733 00:05:17.946 killing process with pid 60733 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60733' 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@973 -- # kill 60733 00:05:17.946 02:00:42 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@978 -- # wait 60733 00:05:19.321 ************************************ 00:05:19.321 END TEST default_locks_via_rpc 00:05:19.321 ************************************ 00:05:19.321 00:05:19.321 real 0m2.344s 00:05:19.321 user 0m2.368s 00:05:19.321 sys 0m0.434s 00:05:19.321 02:00:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:19.321 02:00:43 event.cpu_locks.default_locks_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:19.321 02:00:43 event.cpu_locks -- event/cpu_locks.sh@168 -- # run_test non_locking_app_on_locked_coremask non_locking_app_on_locked_coremask 00:05:19.321 02:00:43 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:19.321 02:00:43 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:19.321 02:00:43 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:19.321 ************************************ 00:05:19.321 START TEST non_locking_app_on_locked_coremask 00:05:19.321 ************************************ 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # non_locking_app_on_locked_coremask 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@80 -- # spdk_tgt_pid=60785 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@81 -- # waitforlisten 60785 /var/tmp/spdk.sock 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60785 ']' 00:05:19.321 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
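default_locks_via_rpc, just finished above, proves the same lock lifecycle can be toggled on a live target instead of at launch: framework_disable_cpumask_locks releases the /var/tmp lock files and framework_enable_cpumask_locks re-claims them. The same round-trip issued directly (socket path and method names as traced; "tgt_pid" is a placeholder):

# Sketch: toggle CPU core lock files on a running target over JSON-RPC.
rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
"$rpc" -s /var/tmp/spdk.sock framework_disable_cpumask_locks   # lock files released
"$rpc" -s /var/tmp/spdk.sock framework_enable_cpumask_locks    # lock files re-claimed
lslocks -p "$tgt_pid" | grep spdk_cpu_lock                     # core 0 lock should be listed again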
00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:19.321 02:00:43 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:19.321 [2024-12-15 02:00:43.870089] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:19.321 [2024-12-15 02:00:43.870227] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60785 ] 00:05:19.321 [2024-12-15 02:00:44.030630] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:19.607 [2024-12-15 02:00:44.127348] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:20.192 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@84 -- # spdk_tgt_pid2=60801 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@85 -- # waitforlisten 60801 /var/tmp/spdk2.sock 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60801 ']' 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@83 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:20.192 02:00:44 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:20.192 [2024-12-15 02:00:44.791761] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:20.192 [2024-12-15 02:00:44.792066] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60801 ] 00:05:20.452 [2024-12-15 02:00:44.966261] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:20.452 [2024-12-15 02:00:44.966307] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:20.452 [2024-12-15 02:00:45.158620] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:21.833 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:21.833 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:21.834 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@87 -- # locks_exist 60785 00:05:21.834 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60785 00:05:21.834 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@89 -- # killprocess 60785 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60785 ']' 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60785 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60785 00:05:22.091 killing process with pid 60785 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60785' 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60785 00:05:22.091 02:00:46 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60785 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- event/cpu_locks.sh@90 -- # killprocess 60801 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60801 ']' 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60801 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:24.618 02:00:48 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60801 00:05:24.618 killing process with pid 60801 00:05:24.618 02:00:49 
event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:24.618 02:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:24.618 02:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60801' 00:05:24.618 02:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60801 00:05:24.618 02:00:49 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60801 00:05:25.550 00:05:25.550 real 0m6.368s 00:05:25.550 user 0m6.608s 00:05:25.550 sys 0m0.808s 00:05:25.550 02:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:25.550 02:00:50 event.cpu_locks.non_locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.550 ************************************ 00:05:25.550 END TEST non_locking_app_on_locked_coremask 00:05:25.550 ************************************ 00:05:25.550 02:00:50 event.cpu_locks -- event/cpu_locks.sh@169 -- # run_test locking_app_on_unlocked_coremask locking_app_on_unlocked_coremask 00:05:25.550 02:00:50 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:25.550 02:00:50 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:25.550 02:00:50 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:25.550 ************************************ 00:05:25.550 START TEST locking_app_on_unlocked_coremask 00:05:25.550 ************************************ 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_unlocked_coremask 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@98 -- # spdk_tgt_pid=60892 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@99 -- # waitforlisten 60892 /var/tmp/spdk.sock 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60892 ']' 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@97 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 --disable-cpumask-locks 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:25.550 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:25.550 02:00:50 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:25.550 [2024-12-15 02:00:50.306365] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
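non_locking_app_on_locked_coremask, completed above, is the cooperative case: the second target opts out of lock claiming entirely, so two instances run reactors on core 0 at once. The launch pattern, condensed from the trace (backgrounding and pid capture are illustrative; the harness synchronizes with waitforlisten rather than relying on startup order):

# Sketch: two targets on one core; only the first claims the lock file.
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 & pid1=$!                                           # claims /var/tmp/spdk_cpu_lock_000
"$bin" -m 0x1 --disable-cpumask-locks -r /var/tmp/spdk2.sock & pid2=$!
# the second instance logs "CPU core locks deactivated." and shares core 0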
00:05:25.550 [2024-12-15 02:00:50.307006] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60892 ] 00:05:25.808 [2024-12-15 02:00:50.459185] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 00:05:25.808 [2024-12-15 02:00:50.459221] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:25.808 [2024-12-15 02:00:50.536271] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:26.372 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:26.372 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:26.372 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:26.372 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@102 -- # spdk_tgt_pid2=60908 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@101 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@103 -- # waitforlisten 60908 /var/tmp/spdk2.sock 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60908 ']' 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:26.373 02:00:51 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:26.630 [2024-12-15 02:00:51.207342] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:26.630 [2024-12-15 02:00:51.207656] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60908 ] 00:05:26.630 [2024-12-15 02:00:51.371650] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:26.901 [2024-12-15 02:00:51.525159] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:27.836 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:27.836 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:27.836 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@105 -- # locks_exist 60908 00:05:27.836 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60908 00:05:27.836 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@107 -- # killprocess 60892 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60892 ']' 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60892 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60892 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:28.094 killing process with pid 60892 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60892' 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60892 00:05:28.094 02:00:52 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60892 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- event/cpu_locks.sh@108 -- # killprocess 60908 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60908 ']' 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@958 -- # kill -0 60908 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60908 00:05:30.625 killing process with pid 60908 00:05:30.625 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:30.625 02:00:55 
event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:30.626 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60908' 00:05:30.626 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@973 -- # kill 60908 00:05:30.626 02:00:55 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@978 -- # wait 60908 00:05:31.563 00:05:31.563 real 0m6.059s 00:05:31.563 user 0m6.314s 00:05:31.563 sys 0m0.807s 00:05:31.563 ************************************ 00:05:31.563 02:00:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:31.563 02:00:56 event.cpu_locks.locking_app_on_unlocked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.563 END TEST locking_app_on_unlocked_coremask 00:05:31.563 ************************************ 00:05:31.822 02:00:56 event.cpu_locks -- event/cpu_locks.sh@170 -- # run_test locking_app_on_locked_coremask locking_app_on_locked_coremask 00:05:31.822 02:00:56 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:31.822 02:00:56 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:31.822 02:00:56 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:31.822 ************************************ 00:05:31.822 START TEST locking_app_on_locked_coremask 00:05:31.822 ************************************ 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1129 -- # locking_app_on_locked_coremask 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@115 -- # spdk_tgt_pid=60999 00:05:31.822 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@116 -- # waitforlisten 60999 /var/tmp/spdk.sock 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 60999 ']' 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@114 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:05:31.822 02:00:56 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:31.822 [2024-12-15 02:00:56.431648] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
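Every test above blocks on waitforlisten before touching its target, but the trace only surfaces the banner, the max_retries=100 counter, and the closing "(( i == 0 )) ... return 0" pair. A condensed reconstruction of that polling loop (the real helper in autotest_common.sh also probes the socket with an actual RPC; this sketch settles for a socket-file check):

# Sketch of the waitforlisten pattern: wait until a freshly launched target
# is alive and its RPC unix socket exists, or give up after max_retries.
waitforlisten() {
    local pid=$1 rpc_addr=${2:-/var/tmp/spdk.sock} max_retries=100 i
    echo "Waiting for process to start up and listen on UNIX domain socket $rpc_addr..."
    for (( i = max_retries; i != 0; i-- )); do
        [[ -S $rpc_addr ]] && break             # socket is up
        kill -0 "$pid" 2>/dev/null || return 1  # target died during startup
        sleep 0.5
    done
    (( i == 0 )) && return 1                    # retries exhausted
    return 0                                    # the traced success path
}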
00:05:31.822 [2024-12-15 02:00:56.431764] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid60999 ] 00:05:32.081 [2024-12-15 02:00:56.584769] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:32.081 [2024-12-15 02:00:56.660652] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@119 -- # spdk_tgt_pid2=61015 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@120 -- # NOT waitforlisten 61015 /var/tmp/spdk2.sock 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 -r /var/tmp/spdk2.sock 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61015 /var/tmp/spdk2.sock 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:32.647 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61015 /var/tmp/spdk2.sock 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@835 -- # '[' -z 61015 ']' 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:32.647 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:32.647 [2024-12-15 02:00:57.277722] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:32.647 [2024-12-15 02:00:57.277808] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61015 ] 00:05:32.905 [2024-12-15 02:00:57.435644] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 0, probably process 60999 has claimed it. 00:05:32.905 [2024-12-15 02:00:57.435694] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:33.163 ERROR: process (pid: 61015) is no longer running 00:05:33.163 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61015) - No such process 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@122 -- # locks_exist 60999 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # lslocks -p 60999 00:05:33.163 02:00:57 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@22 -- # grep -q spdk_cpu_lock 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- event/cpu_locks.sh@124 -- # killprocess 60999 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@954 -- # '[' -z 60999 ']' 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@958 -- # kill -0 60999 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # uname 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 60999 00:05:33.422 killing process with pid 60999 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 60999' 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@973 -- # kill 60999 00:05:33.422 02:00:58 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@978 -- # wait 60999 00:05:34.835 ************************************ 00:05:34.835 END TEST locking_app_on_locked_coremask 00:05:34.835 ************************************ 00:05:34.835 00:05:34.835 real 0m2.911s 00:05:34.835 user 0m3.092s 00:05:34.835 sys 0m0.499s 00:05:34.835 02:00:59 
event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:34.835 02:00:59 event.cpu_locks.locking_app_on_locked_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.835 02:00:59 event.cpu_locks -- event/cpu_locks.sh@171 -- # run_test locking_overlapped_coremask locking_overlapped_coremask 00:05:34.835 02:00:59 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:34.835 02:00:59 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:34.835 02:00:59 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:34.835 ************************************ 00:05:34.835 START TEST locking_overlapped_coremask 00:05:34.835 ************************************ 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@132 -- # spdk_tgt_pid=61068 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@133 -- # waitforlisten 61068 /var/tmp/spdk.sock 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61068 ']' 00:05:34.835 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@131 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:34.835 02:00:59 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:34.835 [2024-12-15 02:00:59.383590] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
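locking_app_on_locked_coremask, closed out above, is the contention case: the second target keeps lock claiming enabled, trips over pid 60999's lock file for core 0, and exits, and the NOT wrapper turns that expected failure into a pass. The shape of that assertion, condensed (NOT is the harness helper seen in the trace; this sketch inlines its effect):

# Sketch: assert that a second locking target CANNOT start on a claimed core.
bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt
"$bin" -m 0x1 -r /var/tmp/spdk2.sock & pid2=$!
if waitforlisten "$pid2" /var/tmp/spdk2.sock; then
    echo "unexpected success: the core 0 lock was not enforced" >&2
    exit 1
fi
# expected stderr from the dying target, as in the trace:
#   Cannot create lock on core 0, probably process <pid> has claimed it.
#   Unable to acquire lock on assigned core mask - exiting.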
00:05:34.835 [2024-12-15 02:00:59.383707] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61068 ] 00:05:34.835 [2024-12-15 02:00:59.538954] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:35.094 [2024-12-15 02:00:59.617675] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:35.094 [2024-12-15 02:00:59.617948] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:35.094 [2024-12-15 02:00:59.617974] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 0 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@135 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@136 -- # spdk_tgt_pid2=61086 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@137 -- # NOT waitforlisten 61086 /var/tmp/spdk2.sock 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@652 -- # local es=0 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@654 -- # valid_exec_arg waitforlisten 61086 /var/tmp/spdk2.sock 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@640 -- # local arg=waitforlisten 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # type -t waitforlisten 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # waitforlisten 61086 /var/tmp/spdk2.sock 00:05:35.660 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@835 -- # '[' -z 61086 ']' 00:05:35.661 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:35.661 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:35.661 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:35.661 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:35.661 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:35.661 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:35.661 [2024-12-15 02:01:00.310832] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:05:35.661 [2024-12-15 02:01:00.311158] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61086 ] 00:05:35.919 [2024-12-15 02:01:00.499002] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61068 has claimed it. 00:05:35.919 [2024-12-15 02:01:00.499054] app.c: 912:spdk_app_start: *ERROR*: Unable to acquire lock on assigned core mask - exiting. 00:05:36.177 ERROR: process (pid: 61086) is no longer running 00:05:36.177 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 850: kill: (61086) - No such process 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@868 -- # return 1 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@655 -- # es=1 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@139 -- # check_remaining_locks 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- event/cpu_locks.sh@141 -- # killprocess 61068 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@954 -- # '[' -z 61068 ']' 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@958 -- # kill -0 61068 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # uname 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61068 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61068' 00:05:36.177 killing process with pid 61068 00:05:36.177 02:01:00 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@973 -- # kill 61068 00:05:36.178 02:01:00 
event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@978 -- # wait 61068 00:05:37.552 00:05:37.552 real 0m2.793s 00:05:37.552 user 0m7.660s 00:05:37.552 sys 0m0.436s 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask -- common/autotest_common.sh@10 -- # set +x 00:05:37.552 ************************************ 00:05:37.552 END TEST locking_overlapped_coremask 00:05:37.552 ************************************ 00:05:37.552 02:01:02 event.cpu_locks -- event/cpu_locks.sh@172 -- # run_test locking_overlapped_coremask_via_rpc locking_overlapped_coremask_via_rpc 00:05:37.552 02:01:02 event.cpu_locks -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:37.552 02:01:02 event.cpu_locks -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:37.552 02:01:02 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:37.552 ************************************ 00:05:37.552 START TEST locking_overlapped_coremask_via_rpc 00:05:37.552 ************************************ 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1129 -- # locking_overlapped_coremask_via_rpc 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@148 -- # spdk_tgt_pid=61139 00:05:37.552 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@149 -- # waitforlisten 61139 /var/tmp/spdk.sock 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@147 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 --disable-cpumask-locks 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61139 ']' 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:37.552 02:01:02 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:37.552 [2024-12-15 02:01:02.217976] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:37.552 [2024-12-15 02:01:02.218211] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61139 ] 00:05:37.811 [2024-12-15 02:01:02.372064] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
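locking_overlapped_coremask, finished above, picks masks that intersect on exactly one core so that precisely one claim fails: 0x7 covers cores 0-2 for the first target and 0x1c covers cores 2-4 for the second, so core 2 is the contested one, matching the traced "Cannot create lock on core 2". Illustrative shell arithmetic, not a harness helper:

# 0x7  = 0b00111 -> cores 0,1,2 (first target)
# 0x1c = 0b11100 -> cores 2,3,4 (second target)
printf 'contested mask: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, i.e. core 2 only
# after the failed second launch, the survivor's three locks must remain:
ls /var/tmp/spdk_cpu_lock_00{0..2}                  # the check_remaining_locks expectation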
00:05:37.811 [2024-12-15 02:01:02.372103] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:37.811 [2024-12-15 02:01:02.470583] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:37.811 [2024-12-15 02:01:02.470832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:37.811 [2024-12-15 02:01:02.470852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.377 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@152 -- # spdk_tgt_pid2=61156 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@153 -- # waitforlisten 61156 /var/tmp/spdk2.sock 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@151 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1c -r /var/tmp/spdk2.sock --disable-cpumask-locks 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61156 ']' 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:38.377 02:01:03 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:38.377 [2024-12-15 02:01:03.124103] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:38.377 [2024-12-15 02:01:03.124244] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1c --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61156 ] 00:05:38.635 [2024-12-15 02:01:03.288277] app.c: 916:spdk_app_start: *NOTICE*: CPU core locks deactivated. 
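The two targets are deliberately started on overlapping masks: 0x7 covers cores 0-2 and 0x1c covers cores 2-4, so the only contended core is 2. A one-line sanity check of that overlap (illustrative arithmetic, not part of the test itself):

printf 'overlap: 0x%x\n' $(( 0x7 & 0x1c ))   # -> 0x4, bit 2 set: core 2, the core named in the error below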
00:05:38.635 [2024-12-15 02:01:03.288311] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:38.892 [2024-12-15 02:01:03.446398] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:05:38.892 [2024-12-15 02:01:03.446463] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:38.892 [2024-12-15 02:01:03.446488] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 4 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@155 -- # rpc_cmd framework_enable_cpumask_locks 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@156 -- # NOT rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@652 -- # local es=0 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # rpc_cmd -s /var/tmp/spdk2.sock framework_enable_cpumask_locks 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.827 [2024-12-15 02:01:04.361300] app.c: 781:claim_cpu_cores: *ERROR*: Cannot create lock on core 2, probably process 61139 has claimed it. 00:05:39.827 request: 00:05:39.827 { 00:05:39.827 "method": "framework_enable_cpumask_locks", 00:05:39.827 "req_id": 1 00:05:39.827 } 00:05:39.827 Got JSON-RPC error response 00:05:39.827 response: 00:05:39.827 { 00:05:39.827 "code": -32603, 00:05:39.827 "message": "Failed to claim CPU core: 2" 00:05:39.827 } 00:05:39.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
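The failed claim above happens entirely over JSON-RPC. A hedged reconstruction of the exchange, using the rpc.py path that appears later in this log (the rpc_cmd wrapper in the trace may pass options not shown here):

# the first target (spdk.sock, mask 0x7) enabled its locks successfully; then:
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk2.sock framework_enable_cpumask_locks
# -> error {"code": -32603, "message": "Failed to claim CPU core: 2"}, matching the response logged above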
00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@655 -- # es=1 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@158 -- # waitforlisten 61139 /var/tmp/spdk.sock 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61139 ']' 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:39.827 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock... 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@159 -- # waitforlisten 61156 /var/tmp/spdk2.sock 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@835 -- # '[' -z 61156 ']' 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk2.sock 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk2.sock...' 
00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:39.827 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@868 -- # return 0 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@161 -- # check_remaining_locks 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@36 -- # locks=(/var/tmp/spdk_cpu_lock_*) 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@37 -- # locks_expected=(/var/tmp/spdk_cpu_lock_{000..002}) 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- event/cpu_locks.sh@38 -- # [[ /var/tmp/spdk_cpu_lock_000 /var/tmp/spdk_cpu_lock_001 /var/tmp/spdk_cpu_lock_002 == \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\0\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\1\ \/\v\a\r\/\t\m\p\/\s\p\d\k\_\c\p\u\_\l\o\c\k\_\0\0\2 ]] 00:05:40.086 00:05:40.086 real 0m2.629s 00:05:40.086 user 0m1.058s 00:05:40.086 sys 0m0.120s 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:40.086 02:01:04 event.cpu_locks.locking_overlapped_coremask_via_rpc -- common/autotest_common.sh@10 -- # set +x 00:05:40.086 ************************************ 00:05:40.086 END TEST locking_overlapped_coremask_via_rpc 00:05:40.086 ************************************ 00:05:40.086 02:01:04 event.cpu_locks -- event/cpu_locks.sh@174 -- # cleanup 00:05:40.086 02:01:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61139 ]] 00:05:40.086 02:01:04 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61139 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61139 ']' 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61139 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61139 00:05:40.086 killing process with pid 61139 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61139' 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61139 00:05:40.086 02:01:04 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61139 00:05:41.460 02:01:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61156 ]] 00:05:41.460 02:01:06 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61156 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61156 ']' 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61156 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@959 -- # uname 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:41.460 
02:01:06 event.cpu_locks -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61156 00:05:41.460 killing process with pid 61156 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@960 -- # process_name=reactor_2 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@964 -- # '[' reactor_2 = sudo ']' 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61156' 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@973 -- # kill 61156 00:05:41.460 02:01:06 event.cpu_locks -- common/autotest_common.sh@978 -- # wait 61156 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@1 -- # cleanup 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # [[ -z 61139 ]] 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@15 -- # killprocess 61139 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61139 ']' 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61139 00:05:42.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61139) - No such process 00:05:42.870 Process with pid 61139 is not found 00:05:42.870 Process with pid 61156 is not found 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61139 is not found' 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # [[ -z 61156 ]] 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@16 -- # killprocess 61156 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@954 -- # '[' -z 61156 ']' 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@958 -- # kill -0 61156 00:05:42.870 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (61156) - No such process 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@981 -- # echo 'Process with pid 61156 is not found' 00:05:42.870 02:01:07 event.cpu_locks -- event/cpu_locks.sh@18 -- # rm -f 00:05:42.870 ************************************ 00:05:42.870 END TEST cpu_locks 00:05:42.870 ************************************ 00:05:42.870 00:05:42.870 real 0m28.558s 00:05:42.870 user 0m48.469s 00:05:42.870 sys 0m4.303s 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.870 02:01:07 event.cpu_locks -- common/autotest_common.sh@10 -- # set +x 00:05:42.870 ************************************ 00:05:42.870 END TEST event 00:05:42.870 ************************************ 00:05:42.870 00:05:42.870 real 0m54.309s 00:05:42.870 user 1m39.716s 00:05:42.870 sys 0m7.083s 00:05:42.870 02:01:07 event -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:42.870 02:01:07 event -- common/autotest_common.sh@10 -- # set +x 00:05:42.870 02:01:07 -- spdk/autotest.sh@169 -- # run_test thread /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.870 02:01:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:42.870 02:01:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.870 02:01:07 -- common/autotest_common.sh@10 -- # set +x 00:05:42.870 ************************************ 00:05:42.870 START TEST thread 00:05:42.870 ************************************ 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/thread.sh 00:05:42.870 * Looking for test storage... 
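The killprocess traces that recur through this section reduce to a few steps; a simplified sketch of what the xtrace shows (the real helper in autotest_common.sh covers more cases, e.g. sudo-wrapped processes):

killprocess() {
    local pid=$1
    if ! kill -0 "$pid" 2>/dev/null; then     # the 'No such process' path seen above
        echo "Process with pid $pid is not found"
        return 0
    fi
    ps --no-headers -o comm= "$pid"           # reports reactor_0 / reactor_2 in this log
    echo "killing process with pid $pid"
    kill "$pid" && wait "$pid"
}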
00:05:42.870 * Found test storage at /home/vagrant/spdk_repo/spdk/test/thread 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1711 -- # lcov --version 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:42.870 02:01:07 thread -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:42.870 02:01:07 thread -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:42.870 02:01:07 thread -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:42.870 02:01:07 thread -- scripts/common.sh@336 -- # IFS=.-: 00:05:42.870 02:01:07 thread -- scripts/common.sh@336 -- # read -ra ver1 00:05:42.870 02:01:07 thread -- scripts/common.sh@337 -- # IFS=.-: 00:05:42.870 02:01:07 thread -- scripts/common.sh@337 -- # read -ra ver2 00:05:42.870 02:01:07 thread -- scripts/common.sh@338 -- # local 'op=<' 00:05:42.870 02:01:07 thread -- scripts/common.sh@340 -- # ver1_l=2 00:05:42.870 02:01:07 thread -- scripts/common.sh@341 -- # ver2_l=1 00:05:42.870 02:01:07 thread -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:42.870 02:01:07 thread -- scripts/common.sh@344 -- # case "$op" in 00:05:42.870 02:01:07 thread -- scripts/common.sh@345 -- # : 1 00:05:42.870 02:01:07 thread -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:42.870 02:01:07 thread -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:42.870 02:01:07 thread -- scripts/common.sh@365 -- # decimal 1 00:05:42.870 02:01:07 thread -- scripts/common.sh@353 -- # local d=1 00:05:42.870 02:01:07 thread -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:42.870 02:01:07 thread -- scripts/common.sh@355 -- # echo 1 00:05:42.870 02:01:07 thread -- scripts/common.sh@365 -- # ver1[v]=1 00:05:42.870 02:01:07 thread -- scripts/common.sh@366 -- # decimal 2 00:05:42.870 02:01:07 thread -- scripts/common.sh@353 -- # local d=2 00:05:42.870 02:01:07 thread -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:42.870 02:01:07 thread -- scripts/common.sh@355 -- # echo 2 00:05:42.870 02:01:07 thread -- scripts/common.sh@366 -- # ver2[v]=2 00:05:42.870 02:01:07 thread -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:42.870 02:01:07 thread -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:42.870 02:01:07 thread -- scripts/common.sh@368 -- # return 0 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.870 --rc genhtml_branch_coverage=1 00:05:42.870 --rc genhtml_function_coverage=1 00:05:42.870 --rc genhtml_legend=1 00:05:42.870 --rc geninfo_all_blocks=1 00:05:42.870 --rc geninfo_unexecuted_blocks=1 00:05:42.870 00:05:42.870 ' 00:05:42.870 02:01:07 thread -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:42.870 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.870 --rc genhtml_branch_coverage=1 00:05:42.870 --rc genhtml_function_coverage=1 00:05:42.870 --rc genhtml_legend=1 00:05:42.871 --rc geninfo_all_blocks=1 00:05:42.871 --rc geninfo_unexecuted_blocks=1 00:05:42.871 00:05:42.871 ' 00:05:42.871 02:01:07 thread -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:42.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 
00:05:42.871 --rc genhtml_branch_coverage=1 00:05:42.871 --rc genhtml_function_coverage=1 00:05:42.871 --rc genhtml_legend=1 00:05:42.871 --rc geninfo_all_blocks=1 00:05:42.871 --rc geninfo_unexecuted_blocks=1 00:05:42.871 00:05:42.871 ' 00:05:42.871 02:01:07 thread -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:42.871 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:42.871 --rc genhtml_branch_coverage=1 00:05:42.871 --rc genhtml_function_coverage=1 00:05:42.871 --rc genhtml_legend=1 00:05:42.871 --rc geninfo_all_blocks=1 00:05:42.871 --rc geninfo_unexecuted_blocks=1 00:05:42.871 00:05:42.871 ' 00:05:42.871 02:01:07 thread -- thread/thread.sh@11 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.871 02:01:07 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:42.871 02:01:07 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:42.871 02:01:07 thread -- common/autotest_common.sh@10 -- # set +x 00:05:42.871 ************************************ 00:05:42.871 START TEST thread_poller_perf 00:05:42.871 ************************************ 00:05:42.871 02:01:07 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 1 -t 1 00:05:42.871 [2024-12-15 02:01:07.506131] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:42.871 [2024-12-15 02:01:07.506345] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61306 ] 00:05:43.129 [2024-12-15 02:01:07.662471] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:43.129 [2024-12-15 02:01:07.737815] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:43.129 Running 1000 pollers for 1 seconds with 1 microseconds period. 
00:05:44.503 [2024-12-15T02:01:09.268Z] ====================================== 00:05:44.503 [2024-12-15T02:01:09.268Z] busy:2611835276 (cyc) 00:05:44.503 [2024-12-15T02:01:09.268Z] total_run_count: 405000 00:05:44.503 [2024-12-15T02:01:09.268Z] tsc_hz: 2600000000 (cyc) 00:05:44.503 [2024-12-15T02:01:09.268Z] ====================================== 00:05:44.503 [2024-12-15T02:01:09.268Z] poller_cost: 6448 (cyc), 2480 (nsec) 00:05:44.503 00:05:44.503 real 0m1.391s 00:05:44.503 user 0m1.217s 00:05:44.503 sys 0m0.067s 00:05:44.503 02:01:08 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:44.503 ************************************ 00:05:44.503 END TEST thread_poller_perf 00:05:44.503 ************************************ 00:05:44.503 02:01:08 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:44.503 02:01:08 thread -- thread/thread.sh@12 -- # run_test thread_poller_perf /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.503 02:01:08 thread -- common/autotest_common.sh@1105 -- # '[' 8 -le 1 ']' 00:05:44.503 02:01:08 thread -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:44.503 02:01:08 thread -- common/autotest_common.sh@10 -- # set +x 00:05:44.503 ************************************ 00:05:44.503 START TEST thread_poller_perf 00:05:44.503 ************************************ 00:05:44.503 02:01:08 thread.thread_poller_perf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/thread/poller_perf/poller_perf -b 1000 -l 0 -t 1 00:05:44.503 [2024-12-15 02:01:08.933944] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:44.503 [2024-12-15 02:01:08.934148] [ DPDK EAL parameters: poller_perf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61343 ] 00:05:44.503 [2024-12-15 02:01:09.089554] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:44.503 Running 1000 pollers for 1 seconds with 0 microseconds period. 
00:05:44.503 [2024-12-15 02:01:09.163956] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:45.875 [2024-12-15T02:01:10.640Z] ====================================== 00:05:45.875 [2024-12-15T02:01:10.640Z] busy:2602699244 (cyc) 00:05:45.875 [2024-12-15T02:01:10.640Z] total_run_count: 4750000 00:05:45.875 [2024-12-15T02:01:10.640Z] tsc_hz: 2600000000 (cyc) 00:05:45.875 [2024-12-15T02:01:10.640Z] ====================================== 00:05:45.875 [2024-12-15T02:01:10.640Z] poller_cost: 547 (cyc), 210 (nsec) 00:05:45.875 00:05:45.875 real 0m1.382s 00:05:45.875 user 0m1.215s 00:05:45.875 sys 0m0.060s 00:05:45.875 02:01:10 thread.thread_poller_perf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.875 ************************************ 00:05:45.875 END TEST thread_poller_perf 00:05:45.875 ************************************ 00:05:45.875 02:01:10 thread.thread_poller_perf -- common/autotest_common.sh@10 -- # set +x 00:05:45.875 02:01:10 thread -- thread/thread.sh@17 -- # [[ y != \y ]] 00:05:45.875 ************************************ 00:05:45.875 END TEST thread 00:05:45.875 ************************************ 00:05:45.875 00:05:45.875 real 0m2.999s 00:05:45.875 user 0m2.543s 00:05:45.875 sys 0m0.237s 00:05:45.875 02:01:10 thread -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:45.875 02:01:10 thread -- common/autotest_common.sh@10 -- # set +x 00:05:45.875 02:01:10 -- spdk/autotest.sh@171 -- # [[ 0 -eq 1 ]] 00:05:45.875 02:01:10 -- spdk/autotest.sh@176 -- # run_test app_cmdline /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:45.876 02:01:10 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:45.876 02:01:10 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:45.876 02:01:10 -- common/autotest_common.sh@10 -- # set +x 00:05:45.876 ************************************ 00:05:45.876 START TEST app_cmdline 00:05:45.876 ************************************ 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/cmdline.sh 00:05:45.876 * Looking for test storage... 
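The poller_cost figures in both result tables above follow directly from the busy cycle count, the run count, and tsc_hz (flags per the banners: -b pollers registered, -l period in usec, -t run time in seconds). Reproducing the second run's numbers with shell arithmetic:

busy=2602699244 runs=4750000 tsc_hz=2600000000
cyc=$(( busy / runs ))                                    # 547 cycles per poller run
echo "$cyc cyc, $(( cyc * 1000000000 / tsc_hz )) nsec"    # 547 cyc, 210 nsec - as reported above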
00:05:45.876 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1711 -- # lcov --version 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@336 -- # IFS=.-: 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@336 -- # read -ra ver1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@337 -- # IFS=.-: 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@337 -- # read -ra ver2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@338 -- # local 'op=<' 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@340 -- # ver1_l=2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@341 -- # ver2_l=1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@344 -- # case "$op" in 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@345 -- # : 1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@365 -- # decimal 1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@353 -- # local d=1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@355 -- # echo 1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@365 -- # ver1[v]=1 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@366 -- # decimal 2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@353 -- # local d=2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@355 -- # echo 2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@366 -- # ver2[v]=2 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:45.876 02:01:10 app_cmdline -- scripts/common.sh@368 -- # return 0 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.876 --rc genhtml_branch_coverage=1 00:05:45.876 --rc genhtml_function_coverage=1 00:05:45.876 --rc genhtml_legend=1 00:05:45.876 --rc geninfo_all_blocks=1 00:05:45.876 --rc geninfo_unexecuted_blocks=1 00:05:45.876 00:05:45.876 ' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.876 --rc genhtml_branch_coverage=1 00:05:45.876 --rc genhtml_function_coverage=1 00:05:45.876 --rc genhtml_legend=1 00:05:45.876 --rc geninfo_all_blocks=1 00:05:45.876 --rc geninfo_unexecuted_blocks=1 00:05:45.876 
00:05:45.876 ' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.876 --rc genhtml_branch_coverage=1 00:05:45.876 --rc genhtml_function_coverage=1 00:05:45.876 --rc genhtml_legend=1 00:05:45.876 --rc geninfo_all_blocks=1 00:05:45.876 --rc geninfo_unexecuted_blocks=1 00:05:45.876 00:05:45.876 ' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:45.876 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:45.876 --rc genhtml_branch_coverage=1 00:05:45.876 --rc genhtml_function_coverage=1 00:05:45.876 --rc genhtml_legend=1 00:05:45.876 --rc geninfo_all_blocks=1 00:05:45.876 --rc geninfo_unexecuted_blocks=1 00:05:45.876 00:05:45.876 ' 00:05:45.876 02:01:10 app_cmdline -- app/cmdline.sh@14 -- # trap 'killprocess $spdk_tgt_pid' EXIT 00:05:45.876 02:01:10 app_cmdline -- app/cmdline.sh@17 -- # spdk_tgt_pid=61426 00:05:45.876 02:01:10 app_cmdline -- app/cmdline.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --rpcs-allowed spdk_get_version,rpc_get_methods 00:05:45.876 02:01:10 app_cmdline -- app/cmdline.sh@18 -- # waitforlisten 61426 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@835 -- # '[' -z 61426 ']' 00:05:45.876 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:45.876 02:01:10 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:45.876 [2024-12-15 02:01:10.569577] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
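Under --rpcs-allowed, the method list itself becomes the assertion: the cmdline test below sorts whatever rpc_get_methods returns and compares it against exactly the two permitted names. A condensed sketch of the @26-@28 check that follows:

methods=($(rpc_cmd rpc_get_methods | jq -r '.[]' | sort))
[[ "${methods[*]}" == "rpc_get_methods spdk_get_version" ]]   # anything extra or missing fails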
00:05:45.876 [2024-12-15 02:01:10.569914] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61426 ] 00:05:46.134 [2024-12-15 02:01:10.724779] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:46.134 [2024-12-15 02:01:10.798765] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:46.700 02:01:11 app_cmdline -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:46.700 02:01:11 app_cmdline -- common/autotest_common.sh@868 -- # return 0 00:05:46.700 02:01:11 app_cmdline -- app/cmdline.sh@20 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version 00:05:46.958 { 00:05:46.958 "version": "SPDK v25.01-pre git sha1 e01cb43b8", 00:05:46.958 "fields": { 00:05:46.958 "major": 25, 00:05:46.958 "minor": 1, 00:05:46.958 "patch": 0, 00:05:46.958 "suffix": "-pre", 00:05:46.958 "commit": "e01cb43b8" 00:05:46.958 } 00:05:46.958 } 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@22 -- # expected_methods=() 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@23 -- # expected_methods+=("rpc_get_methods") 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@24 -- # expected_methods+=("spdk_get_version") 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@26 -- # methods=($(rpc_cmd rpc_get_methods | jq -r ".[]" | sort)) 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@26 -- # rpc_cmd rpc_get_methods 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@26 -- # jq -r '.[]' 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@26 -- # sort 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@27 -- # (( 2 == 2 )) 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@28 -- # [[ rpc_get_methods spdk_get_version == \r\p\c\_\g\e\t\_\m\e\t\h\o\d\s\ \s\p\d\k\_\g\e\t\_\v\e\r\s\i\o\n ]] 00:05:46.958 02:01:11 app_cmdline -- app/cmdline.sh@30 -- # NOT /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@652 -- # local es=0 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@654 -- # valid_exec_arg /home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@640 -- # local arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@644 -- # type -t /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@646 -- # type -P /home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@646 -- # arg=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@646 -- # [[ -x /home/vagrant/spdk_repo/spdk/scripts/rpc.py ]] 00:05:46.958 02:01:11 app_cmdline -- common/autotest_common.sh@655 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats 00:05:46.958 request: 00:05:46.958 { 00:05:46.958 "method": "env_dpdk_get_mem_stats", 00:05:46.958 "req_id": 1 00:05:46.958 } 00:05:46.958 Got JSON-RPC error response 00:05:46.958 response: 00:05:46.958 { 00:05:46.958 "code": -32601, 00:05:46.958 "message": "Method not found" 00:05:46.958 } 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@655 -- # es=1 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:05:47.216 02:01:11 app_cmdline -- app/cmdline.sh@1 -- # killprocess 61426 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@954 -- # '[' -z 61426 ']' 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@958 -- # kill -0 61426 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@959 -- # uname 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61426 00:05:47.216 killing process with pid 61426 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61426' 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@973 -- # kill 61426 00:05:47.216 02:01:11 app_cmdline -- common/autotest_common.sh@978 -- # wait 61426 00:05:48.593 00:05:48.593 real 0m2.554s 00:05:48.593 user 0m2.783s 00:05:48.593 sys 0m0.386s 00:05:48.593 02:01:12 app_cmdline -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.593 02:01:12 app_cmdline -- common/autotest_common.sh@10 -- # set +x 00:05:48.593 ************************************ 00:05:48.593 END TEST app_cmdline 00:05:48.593 ************************************ 00:05:48.593 02:01:12 -- spdk/autotest.sh@177 -- # run_test version /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.593 02:01:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:05:48.593 02:01:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.593 02:01:12 -- common/autotest_common.sh@10 -- # set +x 00:05:48.593 ************************************ 00:05:48.593 START TEST version 00:05:48.593 ************************************ 00:05:48.593 02:01:12 version -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/app/version.sh 00:05:48.593 * Looking for test storage... 
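The -32601 rejection above is the allow-list doing its job: the target was launched with --rpcs-allowed spdk_get_version,rpc_get_methods, so any other method is refused before dispatch. An illustrative pair of direct calls (the trace wraps the same command in the NOT/valid_exec_arg helpers):

/home/vagrant/spdk_repo/spdk/scripts/rpc.py spdk_get_version          # allowed: returns the version JSON shown above
/home/vagrant/spdk_repo/spdk/scripts/rpc.py env_dpdk_get_mem_stats    # blocked: {"code": -32601, "message": "Method not found"}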
00:05:48.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/app 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.593 02:01:13 version -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.593 02:01:13 version -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.593 02:01:13 version -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.593 02:01:13 version -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.593 02:01:13 version -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.593 02:01:13 version -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.593 02:01:13 version -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.593 02:01:13 version -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.593 02:01:13 version -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.593 02:01:13 version -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.593 02:01:13 version -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.593 02:01:13 version -- scripts/common.sh@344 -- # case "$op" in 00:05:48.593 02:01:13 version -- scripts/common.sh@345 -- # : 1 00:05:48.593 02:01:13 version -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.593 02:01:13 version -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:05:48.593 02:01:13 version -- scripts/common.sh@365 -- # decimal 1 00:05:48.593 02:01:13 version -- scripts/common.sh@353 -- # local d=1 00:05:48.593 02:01:13 version -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.593 02:01:13 version -- scripts/common.sh@355 -- # echo 1 00:05:48.593 02:01:13 version -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.593 02:01:13 version -- scripts/common.sh@366 -- # decimal 2 00:05:48.593 02:01:13 version -- scripts/common.sh@353 -- # local d=2 00:05:48.593 02:01:13 version -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.593 02:01:13 version -- scripts/common.sh@355 -- # echo 2 00:05:48.593 02:01:13 version -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.593 02:01:13 version -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.593 02:01:13 version -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.593 02:01:13 version -- scripts/common.sh@368 -- # return 0 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.593 --rc genhtml_branch_coverage=1 00:05:48.593 --rc genhtml_function_coverage=1 00:05:48.593 --rc genhtml_legend=1 00:05:48.593 --rc geninfo_all_blocks=1 00:05:48.593 --rc geninfo_unexecuted_blocks=1 00:05:48.593 00:05:48.593 ' 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.593 --rc genhtml_branch_coverage=1 00:05:48.593 --rc genhtml_function_coverage=1 00:05:48.593 --rc genhtml_legend=1 00:05:48.593 --rc geninfo_all_blocks=1 00:05:48.593 --rc geninfo_unexecuted_blocks=1 00:05:48.593 00:05:48.593 ' 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.593 --rc lcov_branch_coverage=1 --rc 
lcov_function_coverage=1 00:05:48.593 --rc genhtml_branch_coverage=1 00:05:48.593 --rc genhtml_function_coverage=1 00:05:48.593 --rc genhtml_legend=1 00:05:48.593 --rc geninfo_all_blocks=1 00:05:48.593 --rc geninfo_unexecuted_blocks=1 00:05:48.593 00:05:48.593 ' 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.593 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.593 --rc genhtml_branch_coverage=1 00:05:48.593 --rc genhtml_function_coverage=1 00:05:48.593 --rc genhtml_legend=1 00:05:48.593 --rc geninfo_all_blocks=1 00:05:48.593 --rc geninfo_unexecuted_blocks=1 00:05:48.593 00:05:48.593 ' 00:05:48.593 02:01:13 version -- app/version.sh@17 -- # get_header_version major 00:05:48.593 02:01:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MAJOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # cut -f2 00:05:48.593 02:01:13 version -- app/version.sh@17 -- # major=25 00:05:48.593 02:01:13 version -- app/version.sh@18 -- # get_header_version minor 00:05:48.593 02:01:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_MINOR[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # cut -f2 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.593 02:01:13 version -- app/version.sh@18 -- # minor=1 00:05:48.593 02:01:13 version -- app/version.sh@19 -- # get_header_version patch 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # cut -f2 00:05:48.593 02:01:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_PATCH[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.593 02:01:13 version -- app/version.sh@19 -- # patch=0 00:05:48.593 02:01:13 version -- app/version.sh@20 -- # get_header_version suffix 00:05:48.593 02:01:13 version -- app/version.sh@13 -- # grep -E '^#define SPDK_VERSION_SUFFIX[[:space:]]+' /home/vagrant/spdk_repo/spdk/include/spdk/version.h 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # cut -f2 00:05:48.593 02:01:13 version -- app/version.sh@14 -- # tr -d '"' 00:05:48.593 02:01:13 version -- app/version.sh@20 -- # suffix=-pre 00:05:48.593 02:01:13 version -- app/version.sh@22 -- # version=25.1 00:05:48.593 02:01:13 version -- app/version.sh@25 -- # (( patch != 0 )) 00:05:48.593 02:01:13 version -- app/version.sh@28 -- # version=25.1rc0 00:05:48.593 02:01:13 version -- app/version.sh@30 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:05:48.593 02:01:13 version -- app/version.sh@30 -- # python3 -c 'import spdk; print(spdk.__version__)' 00:05:48.593 02:01:13 version -- app/version.sh@30 -- # py_version=25.1rc0 00:05:48.593 02:01:13 version -- app/version.sh@31 -- # [[ 25.1rc0 == \2\5\.\1\r\c\0 ]] 00:05:48.593 00:05:48.593 real 0m0.202s 00:05:48.593 user 0m0.129s 00:05:48.593 sys 0m0.100s 00:05:48.593 ************************************ 00:05:48.593 END TEST version 00:05:48.593 ************************************ 00:05:48.593 02:01:13 version -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:48.593 02:01:13 version -- common/autotest_common.sh@10 -- # set +x 00:05:48.593 02:01:13 -- 
spdk/autotest.sh@179 -- # '[' 0 -eq 1 ']' 00:05:48.593 02:01:13 -- spdk/autotest.sh@188 -- # [[ 0 -eq 1 ]] 00:05:48.593 02:01:13 -- spdk/autotest.sh@194 -- # uname -s 00:05:48.593 02:01:13 -- spdk/autotest.sh@194 -- # [[ Linux == Linux ]] 00:05:48.593 02:01:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.593 02:01:13 -- spdk/autotest.sh@195 -- # [[ 0 -eq 1 ]] 00:05:48.593 02:01:13 -- spdk/autotest.sh@207 -- # '[' 1 -eq 1 ']' 00:05:48.593 02:01:13 -- spdk/autotest.sh@208 -- # run_test blockdev_nvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:48.593 02:01:13 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:48.593 02:01:13 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:48.593 02:01:13 -- common/autotest_common.sh@10 -- # set +x 00:05:48.593 ************************************ 00:05:48.593 START TEST blockdev_nvme 00:05:48.593 ************************************ 00:05:48.593 02:01:13 blockdev_nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh nvme 00:05:48.593 * Looking for test storage... 00:05:48.593 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:05:48.593 02:01:13 blockdev_nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:05:48.593 02:01:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:05:48.593 02:01:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:05:48.593 02:01:13 blockdev_nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@336 -- # IFS=.-: 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@336 -- # read -ra ver1 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@337 -- # IFS=.-: 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@337 -- # read -ra ver2 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@338 -- # local 'op=<' 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@340 -- # ver1_l=2 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@341 -- # ver2_l=1 00:05:48.593 02:01:13 blockdev_nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@344 -- # case "$op" in 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@345 -- # : 1 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@365 -- # decimal 1 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=1 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 1 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:05:48.594 02:01:13 blockdev_nvme -- scripts/common.sh@366 -- # decimal 2 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@353 -- # local d=2 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@355 -- # echo 2 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:05:48.855 02:01:13 blockdev_nvme -- scripts/common.sh@368 -- # return 0 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:05:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.855 --rc genhtml_branch_coverage=1 00:05:48.855 --rc genhtml_function_coverage=1 00:05:48.855 --rc genhtml_legend=1 00:05:48.855 --rc geninfo_all_blocks=1 00:05:48.855 --rc geninfo_unexecuted_blocks=1 00:05:48.855 00:05:48.855 ' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:05:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.855 --rc genhtml_branch_coverage=1 00:05:48.855 --rc genhtml_function_coverage=1 00:05:48.855 --rc genhtml_legend=1 00:05:48.855 --rc geninfo_all_blocks=1 00:05:48.855 --rc geninfo_unexecuted_blocks=1 00:05:48.855 00:05:48.855 ' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:05:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.855 --rc genhtml_branch_coverage=1 00:05:48.855 --rc genhtml_function_coverage=1 00:05:48.855 --rc genhtml_legend=1 00:05:48.855 --rc geninfo_all_blocks=1 00:05:48.855 --rc geninfo_unexecuted_blocks=1 00:05:48.855 00:05:48.855 ' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:05:48.855 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:05:48.855 --rc genhtml_branch_coverage=1 00:05:48.855 --rc genhtml_function_coverage=1 00:05:48.855 --rc genhtml_legend=1 00:05:48.855 --rc geninfo_all_blocks=1 00:05:48.855 --rc geninfo_unexecuted_blocks=1 00:05:48.855 00:05:48.855 ' 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:05:48.855 02:01:13 blockdev_nvme -- bdev/nbd_common.sh@6 -- # set -e 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # export 
RPC_PIPE_TIMEOUT=30 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@20 -- # : 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # uname -s 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@719 -- # test_type=nvme 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@721 -- # dek= 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == bdev ]] 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@727 -- # [[ nvme == crypto_* ]] 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=61593 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@49 -- # waitforlisten 61593 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@835 -- # '[' -z 61593 ']' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:48.855 02:01:13 blockdev_nvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:48.855 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:48.855 02:01:13 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:48.855 [2024-12-15 02:01:13.441275] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
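The setup_nvme_conf step below feeds gen_nvme.sh output to load_subsystem_config, attaching four PCIe controllers in one call. A hypothetical per-controller equivalent via rpc.py, assuming the standard bdev_nvme_attach_controller flags (-b name, -t transport, -a address); only the JSON form is what this run actually used:

/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0
# ...and likewise Nvme1-Nvme3 at 0000:00:11.0, 0000:00:12.0, 0000:00:13.0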
00:05:48.855 [2024-12-15 02:01:13.441496] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61593 ] 00:05:48.855 [2024-12-15 02:01:13.601466] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:49.115 [2024-12-15 02:01:13.697523] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:49.687 02:01:14 blockdev_nvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:49.687 02:01:14 blockdev_nvme -- common/autotest_common.sh@868 -- # return 0 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@736 -- # setup_nvme_conf 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@81 -- # local json 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # mapfile -t json 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:05:49.687 02:01:14 blockdev_nvme -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:05:49.687 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.687 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # cat 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:49.948 02:01:14 blockdev_nvme -- 
bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:05:49.948 02:01:14 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:49.948 02:01:14 blockdev_nvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "c67e7491-12a5-4f4e-a238-0919aefeae8a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "c67e7491-12a5-4f4e-a238-0919aefeae8a",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1",' ' "aliases": [' ' "8e2fa3de-3aa1-4974-ae0b-0f527d4dcf80"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "8e2fa3de-3aa1-4974-ae0b-0f527d4dcf80",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:11.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:11.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12341",' ' "firmware_revision": "8.0.0",' ' "subnqn": 
"nqn.2019-08.org.qemu:12341",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "a8c6ffdd-5a15-47fa-83b0-240dd2ed0520"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "a8c6ffdd-5a15-47fa-83b0-240dd2ed0520",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "08da5cb2-35a8-4217-bd4c-a5d826749166"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "08da5cb2-35a8-4217-bd4c-a5d826749166",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "5ba1210b-5cd8-4ac9-a143-e37d3b38d515"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 
1048576,' ' "uuid": "5ba1210b-5cd8-4ac9-a143-e37d3b38d515",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "af9a6a44-63d6-4ef9-a98c-38b68f6a297c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "af9a6a44-63d6-4ef9-a98c-38b68f6a297c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:05:50.211 02:01:14 blockdev_nvme -- bdev/blockdev.sh@791 -- # killprocess 61593 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@954 -- # '[' -z 61593 ']' 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@958 -- # kill -0 61593 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@959 -- # uname 00:05:50.211 02:01:14 
blockdev_nvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61593 00:05:50.211 killing process with pid 61593 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61593' 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@973 -- # kill 61593 00:05:50.211 02:01:14 blockdev_nvme -- common/autotest_common.sh@978 -- # wait 61593 00:05:51.585 02:01:16 blockdev_nvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:05:51.585 02:01:16 blockdev_nvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:51.585 02:01:16 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:05:51.585 02:01:16 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:51.585 02:01:16 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:51.585 ************************************ 00:05:51.585 START TEST bdev_hello_world 00:05:51.585 ************************************ 00:05:51.585 02:01:16 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:05:51.585 [2024-12-15 02:01:16.111115] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:05:51.585 [2024-12-15 02:01:16.111248] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61677 ] 00:05:51.585 [2024-12-15 02:01:16.266267] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:51.585 [2024-12-15 02:01:16.341025] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.151 [2024-12-15 02:01:16.833108] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:05:52.151 [2024-12-15 02:01:16.833150] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:05:52.151 [2024-12-15 02:01:16.833170] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:05:52.151 [2024-12-15 02:01:16.835164] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:05:52.151 [2024-12-15 02:01:16.835586] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:05:52.151 [2024-12-15 02:01:16.835609] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:05:52.151 [2024-12-15 02:01:16.835823] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
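The hello_bdev example driven by run_test above can also be invoked by hand against the same generated config; a minimal sketch, with the --json and -b arguments mirroring the traced command line (Nvme0n1 is the first unclaimed bdev selected earlier):

# write "Hello World!" through the bdev layer and read it back
/home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1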
00:05:52.151 00:05:52.151 [2024-12-15 02:01:16.835841] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:05:52.721 00:05:52.721 real 0m1.335s 00:05:52.721 user 0m1.084s 00:05:52.721 sys 0m0.147s 00:05:52.721 ************************************ 00:05:52.721 END TEST bdev_hello_world 00:05:52.721 ************************************ 00:05:52.721 02:01:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:52.721 02:01:17 blockdev_nvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 02:01:17 blockdev_nvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:05:52.721 02:01:17 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:05:52.721 02:01:17 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:52.721 02:01:17 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:52.721 ************************************ 00:05:52.721 START TEST bdev_bounds 00:05:52.721 ************************************ 00:05:52.721 Process bdevio pid: 61708 00:05:52.721 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:05:52.721 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:05:52.721 02:01:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=61708 00:05:52.721 02:01:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:05:52.721 02:01:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 61708' 00:05:52.721 02:01:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 61708 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 61708 ']' 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:52.722 02:01:17 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:52.981 [2024-12-15 02:01:17.500254] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
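bdevio is started here with -w, i.e. it loads the config and then waits; the per-bdev suites only run once the perform_tests RPC arrives (see the tests.py call below). A minimal sketch of the same two-step invocation, using the paths from this run:

# start bdevio in wait mode against the generated config ...
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 \
    --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json &
# ... then kick off the suites over the RPC socket
/home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests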
00:05:52.981 [2024-12-15 02:01:17.500369] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid61708 ] 00:05:52.981 [2024-12-15 02:01:17.654623] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:05:52.981 [2024-12-15 02:01:17.730968] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:05:52.981 [2024-12-15 02:01:17.731260] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:52.981 [2024-12-15 02:01:17.731267] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:05:53.553 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:53.553 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:05:53.553 02:01:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:05:53.814 I/O targets: 00:05:53.814 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:05:53.814 Nvme1n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:05:53.814 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.814 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.814 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:05:53.814 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:05:53.814 00:05:53.814 00:05:53.814 CUnit - A unit testing framework for C - Version 2.1-3 00:05:53.814 http://cunit.sourceforge.net/ 00:05:53.814 00:05:53.814 00:05:53.814 Suite: bdevio tests on: Nvme3n1 00:05:53.814 Test: blockdev write read block ...passed 00:05:53.814 Test: blockdev write zeroes read block ...passed 00:05:53.814 Test: blockdev write zeroes read no split ...passed 00:05:53.814 Test: blockdev write zeroes read split ...passed 00:05:53.814 Test: blockdev write zeroes read split partial ...passed 00:05:53.815 Test: blockdev reset ...[2024-12-15 02:01:18.417236] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:05:53.815 passed 00:05:53.815 Test: blockdev write read 8 blocks ...[2024-12-15 02:01:18.420766] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
00:05:53.815 passed 00:05:53.815 Test: blockdev write read size > 128k ...passed 00:05:53.815 Test: blockdev write read invalid size ...passed 00:05:53.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.815 Test: blockdev write read max offset ...passed 00:05:53.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.815 Test: blockdev writev readv 8 blocks ...passed 00:05:53.815 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.815 Test: blockdev writev readv block ...passed 00:05:53.815 Test: blockdev writev readv size > 128k ...passed 00:05:53.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.815 Test: blockdev comparev and writev ...[2024-12-15 02:01:18.431073] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2bb80a000 len:0x1000 00:05:53.815 [2024-12-15 02:01:18.431120] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.815 passed 00:05:53.815 Test: blockdev nvme passthru rw ...passed 00:05:53.815 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.432456] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.815 [2024-12-15 02:01:18.432492] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.815 passed 00:05:53.815 Test: blockdev nvme admin passthru ...passed 00:05:53.815 Test: blockdev copy ...passed 00:05:53.815 Suite: bdevio tests on: Nvme2n3 00:05:53.815 Test: blockdev write read block ...passed 00:05:53.815 Test: blockdev write zeroes read block ...passed 00:05:53.815 Test: blockdev write zeroes read no split ...passed 00:05:53.815 Test: blockdev write zeroes read split ...passed 00:05:53.815 Test: blockdev write zeroes read split partial ...passed 00:05:53.815 Test: blockdev reset ...[2024-12-15 02:01:18.484023] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:53.815 [2024-12-15 02:01:18.487394] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:05:53.815 passed 00:05:53.815 Test: blockdev write read 8 blocks ...passed 00:05:53.815 Test: blockdev write read size > 128k ...passed 00:05:53.815 Test: blockdev write read invalid size ...passed 00:05:53.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.815 Test: blockdev write read max offset ...passed 00:05:53.815 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:53.815 Test: blockdev writev readv 8 blocks ...passed 00:05:53.815 Test: blockdev writev readv 30 x 1block ...passed 00:05:53.815 Test: blockdev writev readv block ...passed 00:05:53.815 Test: blockdev writev readv size > 128k ...passed 00:05:53.815 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:53.815 Test: blockdev comparev and writev ...[2024-12-15 02:01:18.503775] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x29ea06000 len:0x1000 00:05:53.815 [2024-12-15 02:01:18.503822] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:53.815 passed 00:05:53.815 Test: blockdev nvme passthru rw ...passed 00:05:53.815 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.506251] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:53.815 [2024-12-15 02:01:18.506358] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:53.815 passed 00:05:53.815 Test: blockdev nvme admin passthru ...passed 00:05:53.815 Test: blockdev copy ...passed 00:05:53.815 Suite: bdevio tests on: Nvme2n2 00:05:53.815 Test: blockdev write read block ...passed 00:05:53.815 Test: blockdev write zeroes read block ...passed 00:05:53.815 Test: blockdev write zeroes read no split ...passed 00:05:53.815 Test: blockdev write zeroes read split ...passed 00:05:53.815 Test: blockdev write zeroes read split partial ...passed 00:05:53.815 Test: blockdev reset ...[2024-12-15 02:01:18.566595] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:53.815 [2024-12-15 02:01:18.570659] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:05:53.815 Test: blockdev write read 8 blocks ...
00:05:53.815 passed 00:05:53.815 Test: blockdev write read size > 128k ...passed 00:05:53.815 Test: blockdev write read invalid size ...passed 00:05:53.815 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:53.815 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:53.815 Test: blockdev write read max offset ...passed 00:05:54.077 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.077 Test: blockdev writev readv 8 blocks ...passed 00:05:54.077 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.077 Test: blockdev writev readv block ...passed 00:05:54.077 Test: blockdev writev readv size > 128k ...passed 00:05:54.077 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.077 Test: blockdev comparev and writev ...[2024-12-15 02:01:18.589088] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d643c000 len:0x1000 00:05:54.077 [2024-12-15 02:01:18.589132] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme passthru rw ...passed 00:05:54.077 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.591067] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.077 [2024-12-15 02:01:18.591100] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme admin passthru ...passed 00:05:54.077 Test: blockdev copy ...passed 00:05:54.077 Suite: bdevio tests on: Nvme2n1 00:05:54.077 Test: blockdev write read block ...passed 00:05:54.077 Test: blockdev write zeroes read block ...passed 00:05:54.077 Test: blockdev write zeroes read no split ...passed 00:05:54.077 Test: blockdev write zeroes read split ...passed 00:05:54.077 Test: blockdev write zeroes read split partial ...passed 00:05:54.077 Test: blockdev reset ...[2024-12-15 02:01:18.648492] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:05:54.077 [2024-12-15 02:01:18.652753] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. passed 00:05:54.077 Test: blockdev write read 8 blocks ...
00:05:54.077 passed 00:05:54.077 Test: blockdev write read size > 128k ...passed 00:05:54.077 Test: blockdev write read invalid size ...passed 00:05:54.077 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.077 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.077 Test: blockdev write read max offset ...passed 00:05:54.077 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.077 Test: blockdev writev readv 8 blocks ...passed 00:05:54.077 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.077 Test: blockdev writev readv block ...passed 00:05:54.077 Test: blockdev writev readv size > 128k ...passed 00:05:54.077 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.077 Test: blockdev comparev and writev ...[2024-12-15 02:01:18.670353] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6438000 len:0x1000 00:05:54.077 [2024-12-15 02:01:18.670393] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme passthru rw ...passed 00:05:54.077 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.672464] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.077 [2024-12-15 02:01:18.672564] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme admin passthru ...passed 00:05:54.077 Test: blockdev copy ...passed 00:05:54.077 Suite: bdevio tests on: Nvme1n1 00:05:54.077 Test: blockdev write read block ...passed 00:05:54.077 Test: blockdev write zeroes read block ...passed 00:05:54.077 Test: blockdev write zeroes read no split ...passed 00:05:54.077 Test: blockdev write zeroes read split ...passed 00:05:54.077 Test: blockdev write zeroes read split partial ...passed 00:05:54.077 Test: blockdev reset ...[2024-12-15 02:01:18.730718] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:05:54.077 [2024-12-15 02:01:18.735807] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. passed 00:05:54.077 Test: blockdev write read 8 blocks ...
00:05:54.077 passed 00:05:54.077 Test: blockdev write read size > 128k ...passed 00:05:54.077 Test: blockdev write read invalid size ...passed 00:05:54.077 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.077 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.077 Test: blockdev write read max offset ...passed 00:05:54.077 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.077 Test: blockdev writev readv 8 blocks ...passed 00:05:54.077 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.077 Test: blockdev writev readv block ...passed 00:05:54.077 Test: blockdev writev readv size > 128k ...passed 00:05:54.077 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.077 Test: blockdev comparev and writev ...[2024-12-15 02:01:18.753555] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2d6434000 len:0x1000 00:05:54.077 [2024-12-15 02:01:18.753604] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme passthru rw ...passed 00:05:54.077 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.755665] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:05:54.077 [2024-12-15 02:01:18.755771] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.077 Test: blockdev nvme admin passthru ...passed 00:05:54.077 Test: blockdev copy ...passed 00:05:54.077 Suite: bdevio tests on: Nvme0n1 00:05:54.077 Test: blockdev write read block ...passed 00:05:54.077 Test: blockdev write zeroes read block ...passed 00:05:54.077 Test: blockdev write zeroes read no split ...passed 00:05:54.077 Test: blockdev write zeroes read split ...passed 00:05:54.077 Test: blockdev write zeroes read split partial ...passed 00:05:54.077 Test: blockdev reset ...[2024-12-15 02:01:18.814747] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:05:54.077 [2024-12-15 02:01:18.818433] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. passed 00:05:54.077 Test: blockdev write read 8 blocks ... 00:05:54.077 passed 00:05:54.077 Test: blockdev write read size > 128k ...passed 00:05:54.077 Test: blockdev write read invalid size ...passed 00:05:54.077 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:05:54.077 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:05:54.077 Test: blockdev write read max offset ...passed 00:05:54.077 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:05:54.077 Test: blockdev writev readv 8 blocks ...passed 00:05:54.077 Test: blockdev writev readv 30 x 1block ...passed 00:05:54.077 Test: blockdev writev readv block ...passed 00:05:54.077 Test: blockdev writev readv size > 128k ...passed 00:05:54.077 Test: blockdev writev readv size > 128k in two iovs ...passed 00:05:54.077 Test: blockdev comparev and writev ...passed 00:05:54.077 Test: blockdev nvme passthru rw ...[2024-12-15 02:01:18.834501] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has
00:05:54.077 passed 00:05:54.077 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:01:18.835941] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:05:54.077 [2024-12-15 02:01:18.836036] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:05:54.077 passed 00:05:54.339 Test: blockdev nvme admin passthru ...passed 00:05:54.339 Test: blockdev copy ...passed 00:05:54.339 00:05:54.339 Run Summary: Type Total Ran Passed Failed Inactive 00:05:54.339 suites 6 6 n/a 0 0 00:05:54.339 tests 138 138 138 0 0 00:05:54.339 asserts 893 893 893 0 n/a 00:05:54.339 00:05:54.339 Elapsed time = 1.195 seconds 00:05:54.339 0 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 61708 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 61708 ']' 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 61708 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61708 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61708' 00:05:54.339 killing process with pid 61708 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 61708 00:05:54.339 02:01:18 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 61708 00:05:54.912 02:01:19 blockdev_nvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:05:54.912 00:05:54.912 real 0m2.132s 00:05:54.912 user 0m5.388s 00:05:54.912 sys 0m0.257s 00:05:54.912 02:01:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:05:54.912 02:01:19 blockdev_nvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 ************************************ 00:05:54.912 END TEST bdev_bounds 00:05:54.912 ************************************ 00:05:54.912 02:01:19 blockdev_nvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:54.912 02:01:19 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:05:54.912 02:01:19 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:05:54.912 02:01:19 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 ************************************ 00:05:54.912 START TEST bdev_nbd 00:05:54.912 ************************************ 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local 
rpc_server=/var/tmp/spdk-nbd.sock 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:05:54.912 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=61762 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 61762 /var/tmp/spdk-nbd.sock 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 61762 ']' 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:05:54.912 02:01:19 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:05:55.173 [2024-12-15 02:01:19.703736] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
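The nbd_rpc_start_stop_verify pass that follows exports each bdev as a kernel NBD block device and probes it with a single direct-I/O 4 KiB read. A minimal sketch of one iteration, assuming the bdev_svc instance above is listening on /var/tmp/spdk-nbd.sock (the scratch-file path is illustrative):

rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
# export the bdev as /dev/nbd0 over the NBD socket ...
$rpc -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0
# ... read one 4 KiB block through it as a smoke test ...
dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
# ... and tear the export back down
$rpc -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0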
00:05:55.173 [2024-12-15 02:01:19.703843] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:05:55.173 [2024-12-15 02:01:19.866662] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:05:55.433 [2024-12-15 02:01:19.963475] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.006 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.264 1+0 records in 
00:05:56.264 1+0 records out 00:05:56.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000575042 s, 7.1 MB/s 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.264 02:01:20 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.264 1+0 records in 00:05:56.264 1+0 records out 00:05:56.264 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000909733 s, 4.5 MB/s 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.264 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # 
waitfornbd nbd2 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.522 1+0 records in 00:05:56.522 1+0 records out 00:05:56.522 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000610949 s, 6.7 MB/s 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.522 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:56.825 1+0 records in 00:05:56.825 1+0 records out 00:05:56.825 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00070259 s, 5.8 MB/s 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.825 02:01:21 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:56.825 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:56.826 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:56.826 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:56.826 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.087 1+0 records in 00:05:57.087 1+0 records out 00:05:57.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000436426 s, 9.4 MB/s 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:57.087 1+0 records in 00:05:57.087 1+0 records out 00:05:57.087 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000851795 s, 4.8 MB/s 00:05:57.087 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:05:57.088 02:01:21 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd0", 00:05:57.349 "bdev_name": "Nvme0n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd1", 00:05:57.349 "bdev_name": "Nvme1n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd2", 00:05:57.349 "bdev_name": "Nvme2n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd3", 00:05:57.349 "bdev_name": "Nvme2n2" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd4", 00:05:57.349 "bdev_name": "Nvme2n3" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd5", 00:05:57.349 "bdev_name": "Nvme3n1" 00:05:57.349 } 00:05:57.349 ]' 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd0", 00:05:57.349 "bdev_name": "Nvme0n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd1", 00:05:57.349 "bdev_name": "Nvme1n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd2", 00:05:57.349 "bdev_name": "Nvme2n1" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd3", 00:05:57.349 "bdev_name": "Nvme2n2" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd4", 00:05:57.349 "bdev_name": "Nvme2n3" 00:05:57.349 }, 00:05:57.349 { 00:05:57.349 "nbd_device": "/dev/nbd5", 00:05:57.349 "bdev_name": "Nvme3n1" 00:05:57.349 } 00:05:57.349 ]' 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.349 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.611 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:57.872 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.131 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:05:58.390 02:01:22 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:05:58.650 02:01:23 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:05:58.650 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:05:58.912 /dev/nbd0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:58.912 
02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:58.912 1+0 records in 00:05:58.912 1+0 records out 00:05:58.912 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000826862 s, 5.0 MB/s 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:58.912 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1 /dev/nbd1 00:05:59.172 /dev/nbd1 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.172 1+0 records in 00:05:59.172 1+0 records out 00:05:59.172 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000379158 s, 10.8 MB/s 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- 
common/autotest_common.sh@893 -- # return 0 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.172 02:01:23 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd10 00:05:59.433 /dev/nbd10 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.433 1+0 records in 00:05:59.433 1+0 records out 00:05:59.433 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00110202 s, 3.7 MB/s 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.433 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd11 00:05:59.694 /dev/nbd11 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.694 02:01:24 
blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.694 1+0 records in 00:05:59.694 1+0 records out 00:05:59.694 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000318191 s, 12.9 MB/s 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.694 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd12 00:05:59.956 /dev/nbd12 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:05:59.956 1+0 records in 00:05:59.956 1+0 records out 00:05:59.956 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0010952 s, 3.7 MB/s 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:05:59.956 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd13 00:06:00.217 /dev/nbd13 
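[editor's note] The nbd_start_disk traces above all repeat the same readiness pattern: poll /proc/partitions until the kernel registers the new node, then prove the device actually serves reads with a single direct-I/O dd before moving on. A minimal standalone sketch of that pattern follows; the helper name and scratch path are hypothetical, the 20-iteration bound and 4096-byte probe mirror the trace, and the retry pacing is assumed rather than taken from the log:

  # Sketch: wait until an NBD device both appears and serves a direct read.
  wait_for_nbd() {
    local nbd_name=$1                      # e.g. nbd13
    local i probe=/tmp/nbdprobe            # hypothetical scratch file
    for ((i = 1; i <= 20; i++)); do        # same retry bound as the trace
      grep -q -w "$nbd_name" /proc/partitions && break
      sleep 0.1                            # pacing assumed, not from the log
    done
    # A /proc/partitions entry alone is not enough; read one block back.
    dd if="/dev/$nbd_name" of="$probe" bs=4096 count=1 iflag=direct || return 1
    [[ "$(stat -c %s "$probe")" != 0 ]] || return 1
    rm -f "$probe"
  }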
00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:00.217 1+0 records in 00:06:00.217 1+0 records out 00:06:00.217 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00087511 s, 4.7 MB/s 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:00.217 02:01:24 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd0", 00:06:00.480 "bdev_name": "Nvme0n1" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd1", 00:06:00.480 "bdev_name": "Nvme1n1" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd10", 00:06:00.480 "bdev_name": "Nvme2n1" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd11", 00:06:00.480 "bdev_name": "Nvme2n2" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd12", 00:06:00.480 "bdev_name": "Nvme2n3" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd13", 00:06:00.480 "bdev_name": "Nvme3n1" 00:06:00.480 } 00:06:00.480 ]' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd0", 00:06:00.480 "bdev_name": "Nvme0n1" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd1", 00:06:00.480 "bdev_name": "Nvme1n1" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd10", 00:06:00.480 "bdev_name": "Nvme2n1" 
00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd11", 00:06:00.480 "bdev_name": "Nvme2n2" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd12", 00:06:00.480 "bdev_name": "Nvme2n3" 00:06:00.480 }, 00:06:00.480 { 00:06:00.480 "nbd_device": "/dev/nbd13", 00:06:00.480 "bdev_name": "Nvme3n1" 00:06:00.480 } 00:06:00.480 ]' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:00.480 /dev/nbd1 00:06:00.480 /dev/nbd10 00:06:00.480 /dev/nbd11 00:06:00.480 /dev/nbd12 00:06:00.480 /dev/nbd13' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:00.480 /dev/nbd1 00:06:00.480 /dev/nbd10 00:06:00.480 /dev/nbd11 00:06:00.480 /dev/nbd12 00:06:00.480 /dev/nbd13' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:00.480 256+0 records in 00:06:00.480 256+0 records out 00:06:00.480 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00678978 s, 154 MB/s 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.480 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:00.742 256+0 records in 00:06:00.742 256+0 records out 00:06:00.742 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.219167 s, 4.8 MB/s 00:06:00.742 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:00.742 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:01.085 256+0 records in 00:06:01.085 256+0 records out 00:06:01.085 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224137 s, 4.7 MB/s 00:06:01.085 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.085 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:01.085 256+0 records in 00:06:01.085 256+0 records out 00:06:01.085 
1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.224086 s, 4.7 MB/s 00:06:01.085 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.085 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:01.376 256+0 records in 00:06:01.376 256+0 records out 00:06:01.376 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.197728 s, 5.3 MB/s 00:06:01.376 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.376 02:01:25 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:01.638 256+0 records in 00:06:01.638 256+0 records out 00:06:01.638 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.222331 s, 4.7 MB/s 00:06:01.638 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:01.638 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:01.899 256+0 records in 00:06:01.899 256+0 records out 00:06:01.899 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.225212 s, 4.7 MB/s 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd 
-- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:01.899 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.161 02:01:26 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.422 02:01:27 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.422 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.683 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:02.942 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@61 -- 
# local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:03.201 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:03.460 02:01:27 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:03.460 malloc_lvol_verify 00:06:03.460 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:03.718 1a8d56ac-1671-4bb7-8b56-135fa6005ba4 00:06:03.718 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:03.976 0bdc44f9-9121-4de7-a235-cab2dade1518 00:06:03.976 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:03.976 /dev/nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:04.234 mke2fs 1.47.0 (5-Feb-2023) 00:06:04.234 Discarding device blocks: 0/4096 done 00:06:04.234 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:04.234 00:06:04.234 Allocating group tables: 0/1 done 00:06:04.234 Writing inode tables: 0/1 done 00:06:04.234 Creating journal (1024 blocks): done 00:06:04.234 Writing superblocks and filesystem accounting information: 0/1 done 00:06:04.234 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:04.234 02:01:28 
blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 61762 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 61762 ']' 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 61762 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:04.234 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 61762 00:06:04.492 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:04.492 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:04.492 killing process with pid 61762 00:06:04.492 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 61762' 00:06:04.492 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 61762 00:06:04.492 02:01:28 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 61762 00:06:05.061 02:01:29 blockdev_nvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:05.061 00:06:05.061 real 0m9.967s 00:06:05.061 user 0m13.725s 00:06:05.061 sys 0m3.157s 00:06:05.061 ************************************ 00:06:05.061 END TEST bdev_nbd 00:06:05.061 ************************************ 00:06:05.061 02:01:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:05.061 02:01:29 blockdev_nvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:05.061 02:01:29 blockdev_nvme -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:05.061 02:01:29 blockdev_nvme -- bdev/blockdev.sh@801 -- # '[' nvme = nvme ']' 00:06:05.061 skipping fio tests on NVMe due to multi-ns failures. 00:06:05.061 02:01:29 blockdev_nvme -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
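[editor's note] The bdev_nbd test that just finished centers on one data-path check: push a shared 1 MiB random buffer through every attached NBD node, then read each device back with cmp. A condensed sketch of that write-then-verify round trip; the device list, block counts, and cmp flags are exactly what the log shows, while the temp-file path is illustrative:

  # Sketch: write a shared random buffer to each NBD device, then verify it.
  nbd_list=(/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13)
  tmp_file=/tmp/nbdrandtest                            # illustrative path
  dd if=/dev/urandom of="$tmp_file" bs=4096 count=256  # 256 x 4 KiB = 1 MiB
  for dev in "${nbd_list[@]}"; do
    dd if="$tmp_file" of="$dev" bs=4096 count=256 oflag=direct
  done
  for dev in "${nbd_list[@]}"; do
    cmp -b -n 1M "$tmp_file" "$dev"                    # byte-for-byte readback
  done
  rm -f "$tmp_file"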
00:06:05.061 02:01:29 blockdev_nvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:05.061 02:01:29 blockdev_nvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:05.061 02:01:29 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:05.061 02:01:29 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:05.061 02:01:29 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:05.061 ************************************ 00:06:05.061 START TEST bdev_verify 00:06:05.061 ************************************ 00:06:05.061 02:01:29 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:05.061 [2024-12-15 02:01:29.716228] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:05.061 [2024-12-15 02:01:29.716334] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62140 ] 00:06:05.320 [2024-12-15 02:01:29.866999] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:05.320 [2024-12-15 02:01:29.941239] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:05.320 [2024-12-15 02:01:29.941269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:05.886 Running I/O for 5 seconds... 00:06:08.210 23552.00 IOPS, 92.00 MiB/s [2024-12-15T02:01:33.916Z] 21024.00 IOPS, 82.12 MiB/s [2024-12-15T02:01:34.859Z] 20266.67 IOPS, 79.17 MiB/s [2024-12-15T02:01:35.801Z] 19984.00 IOPS, 78.06 MiB/s [2024-12-15T02:01:35.801Z] 19686.40 IOPS, 76.90 MiB/s 00:06:11.036 Latency(us) 00:06:11.036 [2024-12-15T02:01:35.801Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:11.036 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.036 Verification LBA range: start 0x0 length 0xbd0bd 00:06:11.036 Nvme0n1 : 5.08 1638.19 6.40 0.00 0.00 77981.43 10082.46 67350.84 00:06:11.036 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.036 Verification LBA range: start 0xbd0bd length 0xbd0bd 00:06:11.036 Nvme0n1 : 5.09 1610.89 6.29 0.00 0.00 78575.79 8771.74 68157.44 00:06:11.036 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.036 Verification LBA range: start 0x0 length 0xa0000 00:06:11.036 Nvme1n1 : 5.08 1637.23 6.40 0.00 0.00 77908.16 13107.20 63721.16 00:06:11.037 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0xa0000 length 0xa0000 00:06:11.037 Nvme1n1 : 5.09 1609.63 6.29 0.00 0.00 78569.56 6125.10 70173.93 00:06:11.037 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x0 length 0x80000 00:06:11.037 Nvme2n1 : 5.09 1636.02 6.39 0.00 0.00 77839.11 14115.45 63721.16 00:06:11.037 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x80000 length 0x80000 00:06:11.037 Nvme2n1 : 5.07 1614.33 6.31 0.00 0.00 79068.11 12149.37 66947.54 00:06:11.037 Job: Nvme2n2 
(Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x0 length 0x80000 00:06:11.037 Nvme2n2 : 5.09 1634.74 6.39 0.00 0.00 77749.53 14317.10 65334.35 00:06:11.037 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x80000 length 0x80000 00:06:11.037 Nvme2n2 : 5.08 1613.77 6.30 0.00 0.00 78945.94 14317.10 65737.65 00:06:11.037 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x0 length 0x80000 00:06:11.037 Nvme2n3 : 5.09 1633.54 6.38 0.00 0.00 77673.24 13510.50 64931.05 00:06:11.037 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x80000 length 0x80000 00:06:11.037 Nvme2n3 : 5.08 1613.17 6.30 0.00 0.00 78810.73 15728.64 63721.16 00:06:11.037 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x0 length 0x20000 00:06:11.037 Nvme3n1 : 5.09 1633.11 6.38 0.00 0.00 77568.48 8217.21 67350.84 00:06:11.037 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096) 00:06:11.037 Verification LBA range: start 0x20000 length 0x20000 00:06:11.037 Nvme3n1 : 5.08 1612.10 6.30 0.00 0.00 78671.12 13611.32 64931.05 00:06:11.037 [2024-12-15T02:01:35.802Z] =================================================================================================================== 00:06:11.037 [2024-12-15T02:01:35.802Z] Total : 19486.72 76.12 0.00 0.00 78276.27 6125.10 70173.93 00:06:12.025 00:06:12.025 real 0m7.058s 00:06:12.025 user 0m13.246s 00:06:12.025 sys 0m0.213s 00:06:12.025 02:01:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:12.025 02:01:36 blockdev_nvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x 00:06:12.025 ************************************ 00:06:12.025 END TEST bdev_verify 00:06:12.025 ************************************ 00:06:12.025 02:01:36 blockdev_nvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:12.025 02:01:36 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:12.025 02:01:36 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:12.025 02:01:36 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:12.300 ************************************ 00:06:12.300 START TEST bdev_verify_big_io 00:06:12.300 ************************************ 00:06:12.300 02:01:36 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 '' 00:06:12.300 [2024-12-15 02:01:36.840931] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:12.300 [2024-12-15 02:01:36.841042] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62238 ] 00:06:12.300 [2024-12-15 02:01:36.998863] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:12.560 [2024-12-15 02:01:37.095691] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:12.560 [2024-12-15 02:01:37.095766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:13.132 Running I/O for 5 seconds... 00:06:17.810 877.00 IOPS, 54.81 MiB/s [2024-12-15T02:01:42.832Z] 2162.00 IOPS, 135.12 MiB/s [2024-12-15T02:01:43.764Z] 1782.00 IOPS, 111.38 MiB/s [2024-12-15T02:01:43.764Z] 2110.50 IOPS, 131.91 MiB/s 00:06:18.999 Latency(us) 00:06:18.999 [2024-12-15T02:01:43.764Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:18.999 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:18.999 Verification LBA range: start 0x0 length 0xbd0b 00:06:18.999 Nvme0n1 : 5.70 115.00 7.19 0.00 0.00 1064738.99 33272.12 1509949.44 00:06:18.999 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:18.999 Verification LBA range: start 0xbd0b length 0xbd0b 00:06:18.999 Nvme0n1 : 5.64 124.87 7.80 0.00 0.00 987492.04 52428.80 1025991.29 00:06:18.999 Job: Nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:18.999 Verification LBA range: start 0x0 length 0xa000 00:06:18.999 Nvme1n1 : 5.88 118.01 7.38 0.00 0.00 1007771.38 53638.70 1548666.09 00:06:18.999 Job: Nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:18.999 Verification LBA range: start 0xa000 length 0xa000 00:06:18.999 Nvme1n1 : 5.78 128.83 8.05 0.00 0.00 932029.79 70173.93 858219.13 00:06:18.999 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:18.999 Verification LBA range: start 0x0 length 0x8000 00:06:18.999 Nvme2n1 : 5.88 121.71 7.61 0.00 0.00 956162.07 54445.29 1587382.74 00:06:18.999 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x8000 length 0x8000 00:06:19.000 Nvme2n1 : 5.78 129.50 8.09 0.00 0.00 899474.19 70980.53 1013085.74 00:06:19.000 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x0 length 0x8000 00:06:19.000 Nvme2n2 : 5.91 126.46 7.90 0.00 0.00 893796.45 20064.10 1626099.40 00:06:19.000 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x8000 length 0x8000 00:06:19.000 Nvme2n2 : 5.78 132.92 8.31 0.00 0.00 855366.37 64527.75 903388.55 00:06:19.000 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x0 length 0x8000 00:06:19.000 Nvme2n3 : 5.93 131.97 8.25 0.00 0.00 828334.30 14417.92 1664816.05 00:06:19.000 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x8000 length 0x8000 00:06:19.000 Nvme2n3 : 5.85 139.60 8.73 0.00 0.00 793241.92 20164.92 1464780.01 00:06:19.000 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x0 length 0x2000 00:06:19.000 Nvme3n1 : 5.96 159.26 9.95 0.00 0.00 666073.59 712.07 1161499.57 00:06:19.000 
Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536) 00:06:19.000 Verification LBA range: start 0x2000 length 0x2000 00:06:19.000 Nvme3n1 : 5.90 162.64 10.17 0.00 0.00 661137.02 579.74 980821.86 00:06:19.000 [2024-12-15T02:01:43.765Z] =================================================================================================================== 00:06:19.000 [2024-12-15T02:01:43.765Z] Total : 1590.75 99.42 0.00 0.00 864964.56 579.74 1664816.05 00:06:20.905 00:06:20.905 real 0m8.495s 00:06:20.905 user 0m16.041s 00:06:20.905 sys 0m0.243s 00:06:20.905 ************************************ 00:06:20.905 END TEST bdev_verify_big_io 00:06:20.905 ************************************ 00:06:20.905 02:01:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:20.905 02:01:45 blockdev_nvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x 00:06:20.905 02:01:45 blockdev_nvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.905 02:01:45 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:20.905 02:01:45 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:20.905 02:01:45 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:20.905 ************************************ 00:06:20.905 START TEST bdev_write_zeroes 00:06:20.905 ************************************ 00:06:20.905 02:01:45 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:20.905 [2024-12-15 02:01:45.395088] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:20.905 [2024-12-15 02:01:45.395376] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62347 ] 00:06:20.905 [2024-12-15 02:01:45.558658] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:21.166 [2024-12-15 02:01:45.678568] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:21.736 Running I/O for 1 seconds... 
00:06:22.934 62055.00 IOPS, 242.40 MiB/s 00:06:22.934 Latency(us) 00:06:22.934 [2024-12-15T02:01:47.699Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:06:22.934 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme0n1 : 1.29 8056.56 31.47 0.00 0.00 14443.41 5066.44 383940.14 00:06:22.934 Job: Nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme1n1 : 1.13 9378.72 36.64 0.00 0.00 13191.27 7612.26 117763.15 00:06:22.934 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme2n1 : 1.13 9368.48 36.60 0.00 0.00 13166.93 7763.50 116149.96 00:06:22.934 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme2n2 : 1.13 9358.74 36.56 0.00 0.00 13114.64 7763.50 115343.36 00:06:22.934 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme2n3 : 1.13 9348.95 36.52 0.00 0.00 13096.44 7662.67 113730.17 00:06:22.934 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096) 00:06:22.934 Nvme3n1 : 1.13 9339.34 36.48 0.00 0.00 13081.32 6805.66 112923.57 00:06:22.934 [2024-12-15T02:01:47.699Z] =================================================================================================================== 00:06:22.934 [2024-12-15T02:01:47.699Z] Total : 54850.79 214.26 0.00 0.00 13345.55 5066.44 383940.14 00:06:23.877 00:06:23.877 real 0m3.085s 00:06:23.877 user 0m2.726s 00:06:23.877 sys 0m0.237s 00:06:23.877 02:01:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:23.877 ************************************ 00:06:23.877 END TEST bdev_write_zeroes 00:06:23.877 ************************************ 00:06:23.877 02:01:48 blockdev_nvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x 00:06:23.877 02:01:48 blockdev_nvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:23.877 02:01:48 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:23.877 02:01:48 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:23.877 02:01:48 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:23.877 ************************************ 00:06:23.877 START TEST bdev_json_nonenclosed 00:06:23.877 ************************************ 00:06:23.877 02:01:48 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:23.877 [2024-12-15 02:01:48.556510] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:23.877 [2024-12-15 02:01:48.556657] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62402 ] 00:06:24.138 [2024-12-15 02:01:48.721544] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.138 [2024-12-15 02:01:48.848884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.138 [2024-12-15 02:01:48.849206] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:06:24.138 [2024-12-15 02:01:48.849236] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:24.138 [2024-12-15 02:01:48.849247] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.400 ************************************ 00:06:24.400 END TEST bdev_json_nonenclosed 00:06:24.400 ************************************ 00:06:24.400 00:06:24.400 real 0m0.559s 00:06:24.400 user 0m0.344s 00:06:24.400 sys 0m0.108s 00:06:24.400 02:01:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.400 02:01:49 blockdev_nvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:06:24.400 02:01:49 blockdev_nvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.400 02:01:49 blockdev_nvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:06:24.400 02:01:49 blockdev_nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:24.400 02:01:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:24.400 ************************************ 00:06:24.400 START TEST bdev_json_nonarray 00:06:24.400 ************************************ 00:06:24.400 02:01:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:06:24.661 [2024-12-15 02:01:49.177422] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:24.661 [2024-12-15 02:01:49.177570] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62433 ] 00:06:24.661 [2024-12-15 02:01:49.342757] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:24.922 [2024-12-15 02:01:49.447359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:24.922 [2024-12-15 02:01:49.447441] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
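The two negative tests here (bdev_json_nonenclosed above, bdev_json_nonarray in progress) each feed bdevperf a deliberately malformed config and expect a clean non-zero exit. The fixture files themselves are not echoed into this log; based on the json_config errors, they would look roughly like this (hypothetical reconstruction, not the actual test files):

  # well-formed shape expected by json_config: a top-level object holding a "subsystems" array
  cat > good.json <<'EOF'
  { "subsystems": [ { "subsystem": "bdev", "config": [] } ] }
  EOF
  # nonenclosed.json: presumably the same content minus the outer { } ("not enclosed in {}")
  # nonarray.json: presumably "subsystems" given as an object rather than an array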
00:06:24.922 [2024-12-15 02:01:49.447458] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:06:24.922 [2024-12-15 02:01:49.447467] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:06:24.922 00:06:24.922 real 0m0.522s 00:06:24.922 user 0m0.295s 00:06:24.922 sys 0m0.122s 00:06:24.922 ************************************ 00:06:24.922 END TEST bdev_json_nonarray 00:06:24.922 ************************************ 00:06:24.922 02:01:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.922 02:01:49 blockdev_nvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@824 -- # [[ nvme == bdev ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@832 -- # [[ nvme == gpt ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@836 -- # [[ nvme == crypto_sw ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@849 -- # cleanup 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@26 -- # [[ nvme == rbd ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@30 -- # [[ nvme == daos ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@34 -- # [[ nvme = \g\p\t ]] 00:06:24.922 02:01:49 blockdev_nvme -- bdev/blockdev.sh@40 -- # [[ nvme == xnvme ]] 00:06:24.922 ************************************ 00:06:24.922 END TEST blockdev_nvme 00:06:24.922 ************************************ 00:06:24.922 00:06:24.922 real 0m36.465s 00:06:24.922 user 0m55.832s 00:06:24.922 sys 0m5.189s 00:06:24.922 02:01:49 blockdev_nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:24.922 02:01:49 blockdev_nvme -- common/autotest_common.sh@10 -- # set +x 00:06:25.183 02:01:49 -- spdk/autotest.sh@209 -- # uname -s 00:06:25.183 02:01:49 -- spdk/autotest.sh@209 -- # [[ Linux == Linux ]] 00:06:25.183 02:01:49 -- spdk/autotest.sh@210 -- # run_test blockdev_nvme_gpt /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:25.183 02:01:49 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:25.183 02:01:49 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:25.183 02:01:49 -- common/autotest_common.sh@10 -- # set +x 00:06:25.183 ************************************ 00:06:25.183 START TEST blockdev_nvme_gpt 00:06:25.183 ************************************ 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh gpt 00:06:25.183 * Looking for test storage... 
00:06:25.183 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lcov --version 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@333 -- # local ver1 ver1_l 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@334 -- # local ver2 ver2_l 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # IFS=.-: 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@336 -- # read -ra ver1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # IFS=.-: 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@337 -- # read -ra ver2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@338 -- # local 'op=<' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@340 -- # ver1_l=2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@341 -- # ver2_l=1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@344 -- # case "$op" in 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@345 -- # : 1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v = 0 )) 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # decimal 1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@365 -- # ver1[v]=1 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # decimal 2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@353 -- # local d=2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@355 -- # echo 2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@366 -- # ver2[v]=2 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:06:25.183 02:01:49 blockdev_nvme_gpt -- scripts/common.sh@368 -- # return 0 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:06:25.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.183 --rc genhtml_branch_coverage=1 00:06:25.183 --rc genhtml_function_coverage=1 00:06:25.183 --rc genhtml_legend=1 00:06:25.183 --rc geninfo_all_blocks=1 00:06:25.183 --rc geninfo_unexecuted_blocks=1 00:06:25.183 00:06:25.183 ' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:06:25.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.183 --rc 
genhtml_branch_coverage=1 00:06:25.183 --rc genhtml_function_coverage=1 00:06:25.183 --rc genhtml_legend=1 00:06:25.183 --rc geninfo_all_blocks=1 00:06:25.183 --rc geninfo_unexecuted_blocks=1 00:06:25.183 00:06:25.183 ' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:06:25.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.183 --rc genhtml_branch_coverage=1 00:06:25.183 --rc genhtml_function_coverage=1 00:06:25.183 --rc genhtml_legend=1 00:06:25.183 --rc geninfo_all_blocks=1 00:06:25.183 --rc geninfo_unexecuted_blocks=1 00:06:25.183 00:06:25.183 ' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:06:25.183 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:06:25.183 --rc genhtml_branch_coverage=1 00:06:25.183 --rc genhtml_function_coverage=1 00:06:25.183 --rc genhtml_legend=1 00:06:25.183 --rc geninfo_all_blocks=1 00:06:25.183 --rc geninfo_unexecuted_blocks=1 00:06:25.183 00:06:25.183 ' 00:06:25.183 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:06:25.183 02:01:49 blockdev_nvme_gpt -- bdev/nbd_common.sh@6 -- # set -e 00:06:25.183 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:06:25.183 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:25.183 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # export RPC_PIPE_TIMEOUT=30 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@20 -- # : 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # uname -s 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@719 -- # test_type=gpt 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@720 -- # crypto_device= 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@721 -- # dek= 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@722 -- # env_ctx= 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == bdev ]] 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@727 -- # [[ gpt == crypto_* ]] 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=62512 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@49 -- # waitforlisten 62512 
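Stepping back to the coverage setup traced above: scripts/common.sh splits the dotted lcov version on the characters ".-:" and compares the fields numerically to decide whether lcov is older than 2, then exports the matching LCOV_OPTS. A simplified equivalent of that check (not SPDK's actual helper) using GNU sort -V:

  lt() { [ "$1" = "$2" ] && return 1; [ "$(printf '%s\n' "$1" "$2" | sort -V | head -n1)" = "$1" ]; }
  lt 1.15 2 && echo "lcov < 2: use the lcov_branch_coverage/lcov_function_coverage --rc flags"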
00:06:25.184 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@835 -- # '[' -z 62512 ']' 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:25.184 02:01:49 blockdev_nvme_gpt -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:06:25.184 02:01:49 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:25.444 [2024-12-15 02:01:49.969284] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:25.444 [2024-12-15 02:01:49.969686] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid62512 ] 00:06:25.444 [2024-12-15 02:01:50.130579] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:25.705 [2024-12-15 02:01:50.227449] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:26.277 02:01:50 blockdev_nvme_gpt -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:26.277 02:01:50 blockdev_nvme_gpt -- common/autotest_common.sh@868 -- # return 0 00:06:26.277 02:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:06:26.277 02:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@739 -- # setup_gpt_conf 00:06:26.277 02:01:50 blockdev_nvme_gpt -- bdev/blockdev.sh@104 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:06:26.538 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:26.538 Waiting for block devices as requested 00:06:26.538 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:06:26.799 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:06:26.799 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:06:26.799 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:06:32.129 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@105 -- # get_zoned_devs 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- 
common/autotest_common.sh@1650 -- # local device=nvme0n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n1/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n2 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n2 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n2/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2n3 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme2n3 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2n3/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3c3n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1650 -- # local device=nvme3c3n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3c3n1/queue/zoned ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:06:32.129 02:01:56 
blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # nvme_devs=('/sys/block/nvme0n1' '/sys/block/nvme1n1' '/sys/block/nvme2n1' '/sys/block/nvme2n2' '/sys/block/nvme2n3' '/sys/block/nvme3n1') 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@106 -- # local nvme_devs nvme_dev 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@107 -- # gpt_nvme= 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@109 -- # for nvme_dev in "${nvme_devs[@]}" 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@110 -- # [[ -z '' ]] 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@111 -- # dev=/dev/nvme0n1 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # parted /dev/nvme0n1 -ms print 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@112 -- # pt='Error: /dev/nvme0n1: unrecognised disk label 00:06:32.129 BYT; 00:06:32.129 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:;' 00:06:32.129 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@113 -- # [[ Error: /dev/nvme0n1: unrecognised disk label 00:06:32.129 BYT; 00:06:32.129 /dev/nvme0n1:5369MB:nvme:4096:4096:unknown:QEMU NVMe Ctrl:; == *\/\d\e\v\/\n\v\m\e\0\n\1\:\ \u\n\r\e\c\o\g\n\i\s\e\d\ \d\i\s\k\ \l\a\b\e\l* ]] 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@114 -- # gpt_nvme=/dev/nvme0n1 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@115 -- # break 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@118 -- # [[ -n /dev/nvme0n1 ]] 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@123 -- # typeset -g g_unique_partguid=6f89f330-603b-4116-ac73-2ca8eae53030 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@124 -- # typeset -g g_unique_partguid_old=abf1734f-66e5-4c0f-aa29-4021d4d307df 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@127 -- # parted -s /dev/nvme0n1 mklabel gpt mkpart SPDK_TEST_first 0% 50% mkpart SPDK_TEST_second 50% 100% 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # get_spdk_gpt_old 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@411 -- # local spdk_guid 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@413 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@415 -- # GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # IFS='()' 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # read -r _ spdk_guid _ 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@416 -- # grep -w SPDK_GPT_PART_TYPE_GUID_OLD /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=0x7c5222bd-0x8f5d-0x4087-0x9c00-0xbf9843c7b58c 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@417 -- # spdk_guid=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@419 -- # echo 7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@129 -- # SPDK_GPT_OLD_GUID=7c5222bd-8f5d-4087-9c00-bf9843c7b58c 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # get_spdk_gpt 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@423 -- # local spdk_guid 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@425 -- # [[ -e /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h ]] 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@427 -- # 
GPT_H=/home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # IFS='()' 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # read -r _ spdk_guid _ 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@428 -- # grep -w SPDK_GPT_PART_TYPE_GUID /home/vagrant/spdk_repo/spdk/module/bdev/gpt/gpt.h 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=0x6527994e-0x2c5a-0x4eec-0x9613-0x8f5944074e8b 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@429 -- # spdk_guid=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.130 02:01:56 blockdev_nvme_gpt -- scripts/common.sh@431 -- # echo 6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@130 -- # SPDK_GPT_GUID=6527994e-2c5a-4eec-9613-8f5944074e8b 00:06:32.130 02:01:56 blockdev_nvme_gpt -- bdev/blockdev.sh@131 -- # sgdisk -t 1:6527994e-2c5a-4eec-9613-8f5944074e8b -u 1:6f89f330-603b-4116-ac73-2ca8eae53030 /dev/nvme0n1 00:06:33.063 The operation has completed successfully. 00:06:33.063 02:01:57 blockdev_nvme_gpt -- bdev/blockdev.sh@132 -- # sgdisk -t 2:7c5222bd-8f5d-4087-9c00-bf9843c7b58c -u 2:abf1734f-66e5-4c0f-aa29-4021d4d307df /dev/nvme0n1 00:06:33.996 The operation has completed successfully. 00:06:33.996 02:01:58 blockdev_nvme_gpt -- bdev/blockdev.sh@133 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:06:34.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:06:34.819 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.078 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.078 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.078 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@134 -- # rpc_cmd bdev_get_bdevs 00:06:35.078 02:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.078 02:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.078 [] 00:06:35.078 02:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@135 -- # setup_nvme_conf 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@81 -- # local json 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # mapfile -t json 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@82 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:06:35.078 02:01:59 blockdev_nvme_gpt -- bdev/blockdev.sh@83 -- # rpc_cmd load_subsystem_config -j ''\''{ "subsystem": "bdev", "config": [ { "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme0", "traddr":"0000:00:10.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme1", "traddr":"0000:00:11.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme2", "traddr":"0000:00:12.0" } },{ "method": "bdev_nvme_attach_controller", "params": { "trtype": "PCIe", "name":"Nvme3", "traddr":"0000:00:13.0" } } ] }'\''' 00:06:35.078 02:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.078 02:01:59 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.336 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:06:35.336 02:02:00 
blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.336 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # cat 00:06:35.336 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.336 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.336 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.336 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@563 -- # xtrace_disable 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # jq -r .name 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "Nvme0n1",' ' "aliases": [' ' "7e161f94-c029-48e9-94e7-2ed0fe3aff97"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "7e161f94-c029-48e9-94e7-2ed0fe3aff97",' ' "numa_id": -1,' ' "md_size": 64,' ' "md_interleave": false,' ' "dif_type": 0,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": true,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:10.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:10.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12340",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12340",' ' 
"oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme1n1p1",' ' "aliases": [' ' "6f89f330-603b-4116-ac73-2ca8eae53030"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655104,' ' "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 256,' ' "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b",' ' "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030",' ' "partition_name": "SPDK_TEST_first"' ' }' ' }' '}' '{' ' "name": "Nvme1n1p2",' ' "aliases": [' ' "abf1734f-66e5-4c0f-aa29-4021d4d307df"' ' ],' ' "product_name": "GPT Disk",' ' "block_size": 4096,' ' "num_blocks": 655103,' ' "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "gpt": {' ' "base_bdev": "Nvme1n1",' ' "offset_blocks": 655360,' ' "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c",' ' "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df",' ' "partition_name": "SPDK_TEST_second"' ' }' ' }' '}' '{' ' "name": "Nvme2n1",' ' "aliases": [' ' "95cde685-f8bf-48df-9484-abd53427775c"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "95cde685-f8bf-48df-9484-abd53427775c",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' 
"trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n2",' ' "aliases": [' ' "ffc933c6-1099-4600-92bd-558b0dd7fb99"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "ffc933c6-1099-4600-92bd-558b0dd7fb99",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 2,' ' "can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme2n3",' ' "aliases": [' ' "d0a719b6-b17a-4fb7-8657-f00155ea07d6"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d0a719b6-b17a-4fb7-8657-f00155ea07d6",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:12.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:12.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12342",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:12342",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": false,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 3,' ' 
"can_share": false' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' '{' ' "name": "Nvme3n1",' ' "aliases": [' ' "3eee5230-c604-4bd4-b986-58bd203d5a1a"' ' ],' ' "product_name": "NVMe disk",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "3eee5230-c604-4bd4-b986-58bd203d5a1a",' ' "numa_id": -1,' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": true,' ' "flush": true,' ' "reset": true,' ' "nvme_admin": true,' ' "nvme_io": true,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": true,' ' "compare_and_write": false,' ' "abort": true,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": true,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {' ' "nvme": [' ' {' ' "pci_address": "0000:00:13.0",' ' "trid": {' ' "trtype": "PCIe",' ' "traddr": "0000:00:13.0"' ' },' ' "ctrlr_data": {' ' "cntlid": 0,' ' "vendor_id": "0x1b36",' ' "model_number": "QEMU NVMe Ctrl",' ' "serial_number": "12343",' ' "firmware_revision": "8.0.0",' ' "subnqn": "nqn.2019-08.org.qemu:fdp-subsys3",' ' "oacs": {' ' "security": 0,' ' "format": 1,' ' "firmware": 0,' ' "ns_manage": 1' ' },' ' "multi_ctrlr": true,' ' "ana_reporting": false' ' },' ' "vs": {' ' "nvme_version": "1.4"' ' },' ' "ns_data": {' ' "id": 1,' ' "can_share": true' ' }' ' }' ' ],' ' "mp_policy": "active_passive"' ' }' '}' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@789 -- # hello_world_bdev=Nvme0n1 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:06:35.595 02:02:00 blockdev_nvme_gpt -- bdev/blockdev.sh@791 -- # killprocess 62512 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@954 -- # '[' -z 62512 ']' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@958 -- # kill -0 62512 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # uname 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 62512 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:35.595 killing process with pid 62512 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@972 -- # echo 'killing process with pid 62512' 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@973 -- # kill 62512 00:06:35.595 02:02:00 blockdev_nvme_gpt -- common/autotest_common.sh@978 -- # wait 62512 00:06:36.969 02:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:36.969 02:02:01 blockdev_nvme_gpt -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:36.969 02:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:06:36.969 02:02:01 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:36.969 02:02:01 
blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:36.969 ************************************ 00:06:36.969 START TEST bdev_hello_world 00:06:36.969 ************************************ 00:06:36.969 02:02:01 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1 '' 00:06:36.969 [2024-12-15 02:02:01.671003] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:36.969 [2024-12-15 02:02:01.671118] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63134 ] 00:06:37.228 [2024-12-15 02:02:01.825896] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:37.228 [2024-12-15 02:02:01.901692] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:37.793 [2024-12-15 02:02:02.399786] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:06:37.793 [2024-12-15 02:02:02.399823] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev Nvme0n1 00:06:37.793 [2024-12-15 02:02:02.399840] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:06:37.793 [2024-12-15 02:02:02.401834] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:06:37.793 [2024-12-15 02:02:02.402366] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:06:37.793 [2024-12-15 02:02:02.402388] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:06:37.793 [2024-12-15 02:02:02.402616] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
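For reference, the whole bdev_hello_world flow traced above — start the app, open the bdev, get an io channel, write a buffer, read it back — is driven by a single invocation of the packaged example, copied here from the run_test line:

  /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev \
      --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b Nvme0n1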
00:06:37.793 00:06:37.793 [2024-12-15 02:02:02.402634] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:06:38.358 00:06:38.358 real 0m1.353s 00:06:38.358 user 0m1.090s 00:06:38.358 sys 0m0.158s 00:06:38.358 02:02:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:38.358 02:02:02 blockdev_nvme_gpt.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:06:38.358 ************************************ 00:06:38.358 END TEST bdev_hello_world 00:06:38.358 ************************************ 00:06:38.358 02:02:02 blockdev_nvme_gpt -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:06:38.358 02:02:02 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:06:38.358 02:02:02 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:38.358 02:02:02 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:38.358 ************************************ 00:06:38.358 START TEST bdev_bounds 00:06:38.358 ************************************ 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=63165 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:06:38.358 Process bdevio pid: 63165 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 63165' 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 63165 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 63165 ']' 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:38.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:38.358 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:38.358 [2024-12-15 02:02:03.060963] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:06:38.358 [2024-12-15 02:02:03.061074] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63165 ] 00:06:38.625 [2024-12-15 02:02:03.219711] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:06:38.625 [2024-12-15 02:02:03.318012] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:38.625 [2024-12-15 02:02:03.318608] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:38.625 [2024-12-15 02:02:03.318625] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:06:39.209 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:39.209 02:02:03 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:06:39.209 02:02:03 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:06:39.467 I/O targets: 00:06:39.467 Nvme0n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:06:39.467 Nvme1n1p1: 655104 blocks of 4096 bytes (2559 MiB) 00:06:39.467 Nvme1n1p2: 655103 blocks of 4096 bytes (2559 MiB) 00:06:39.467 Nvme2n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.467 Nvme2n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.467 Nvme2n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:06:39.467 Nvme3n1: 262144 blocks of 4096 bytes (1024 MiB) 00:06:39.467 00:06:39.467 00:06:39.467 CUnit - A unit testing framework for C - Version 2.1-3 00:06:39.467 http://cunit.sourceforge.net/ 00:06:39.467 00:06:39.467 00:06:39.467 Suite: bdevio tests on: Nvme3n1 00:06:39.467 Test: blockdev write read block ...passed 00:06:39.467 Test: blockdev write zeroes read block ...passed 00:06:39.467 Test: blockdev write zeroes read no split ...passed 00:06:39.467 Test: blockdev write zeroes read split ...passed 00:06:39.467 Test: blockdev write zeroes read split partial ...passed 00:06:39.467 Test: blockdev reset ...[2024-12-15 02:02:04.038458] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:13.0, 0] resetting controller 00:06:39.467 [2024-12-15 02:02:04.041412] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:13.0, 0] Resetting controller successful. 
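A note on the launch above: bdevio is started with -w, so it registers its CUnit suites and then idles until the perform_tests RPC arrives (issued here by tests.py); -s 0 forwards the PRE_RESERVED_MEM=0 that blockdev.sh set earlier. A minimal sketch of the same driver pattern, assuming stock SPDK paths:

  test/bdev/bdevio/bdevio -w -s 0 --json test/bdev/bdev.json &   # waits for the RPC instead of running immediately
  test/bdev/bdevio/tests.py perform_tests                        # kicks off the suites listed under "I/O targets"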
00:06:39.467 passed 00:06:39.467 Test: blockdev write read 8 blocks ...passed 00:06:39.467 Test: blockdev write read size > 128k ...passed 00:06:39.467 Test: blockdev write read invalid size ...passed 00:06:39.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.467 Test: blockdev write read max offset ...passed 00:06:39.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.467 Test: blockdev writev readv 8 blocks ...passed 00:06:39.467 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.467 Test: blockdev writev readv block ...passed 00:06:39.467 Test: blockdev writev readv size > 128k ...passed 00:06:39.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.467 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.048184] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9004000 len:0x1000 00:06:39.467 [2024-12-15 02:02:04.048246] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.467 passed 00:06:39.467 Test: blockdev nvme passthru rw ...passed 00:06:39.467 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:02:04.048976] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.467 [2024-12-15 02:02:04.049017] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.467 passed 00:06:39.467 Test: blockdev nvme admin passthru ...passed 00:06:39.467 Test: blockdev copy ...passed 00:06:39.467 Suite: bdevio tests on: Nvme2n3 00:06:39.467 Test: blockdev write read block ...passed 00:06:39.467 Test: blockdev write zeroes read block ...passed 00:06:39.467 Test: blockdev write zeroes read no split ...passed 00:06:39.467 Test: blockdev write zeroes read split ...passed 00:06:39.467 Test: blockdev write zeroes read split partial ...passed 00:06:39.467 Test: blockdev reset ...[2024-12-15 02:02:04.105456] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.467 [2024-12-15 02:02:04.108472] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
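For readers skimming the *NOTICE* lines: COMPARE FAILURE (02/85) decodes to NVMe status code type 2h (Media and Data Integrity Errors), status code 85h (Compare Failure), i.e. a miscompare. Every comparev-and-writev test in this run logs one and still reports passed, so the miscompare appears to be provoked deliberately by the test rather than indicating a device fault.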
00:06:39.467 passed 00:06:39.467 Test: blockdev write read 8 blocks ...passed 00:06:39.467 Test: blockdev write read size > 128k ...passed 00:06:39.467 Test: blockdev write read invalid size ...passed 00:06:39.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.467 Test: blockdev write read max offset ...passed 00:06:39.467 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.467 Test: blockdev writev readv 8 blocks ...passed 00:06:39.467 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.467 Test: blockdev writev readv block ...passed 00:06:39.467 Test: blockdev writev readv size > 128k ...passed 00:06:39.467 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.467 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.115140] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:3 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2b9002000 len:0x1000 00:06:39.467 [2024-12-15 02:02:04.115179] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.467 passed 00:06:39.467 Test: blockdev nvme passthru rw ...passed 00:06:39.467 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.467 Test: blockdev nvme admin passthru ...[2024-12-15 02:02:04.115691] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.467 [2024-12-15 02:02:04.115713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.467 passed 00:06:39.467 Test: blockdev copy ...passed 00:06:39.467 Suite: bdevio tests on: Nvme2n2 00:06:39.467 Test: blockdev write read block ...passed 00:06:39.467 Test: blockdev write zeroes read block ...passed 00:06:39.467 Test: blockdev write zeroes read no split ...passed 00:06:39.467 Test: blockdev write zeroes read split ...passed 00:06:39.467 Test: blockdev write zeroes read split partial ...passed 00:06:39.467 Test: blockdev reset ...[2024-12-15 02:02:04.172999] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.467 [2024-12-15 02:02:04.175858] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.467 passed 00:06:39.467 Test: blockdev write read 8 blocks ...passed 00:06:39.467 Test: blockdev write read size > 128k ...passed 00:06:39.467 Test: blockdev write read invalid size ...passed 00:06:39.467 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.467 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.468 Test: blockdev write read max offset ...passed 00:06:39.468 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.468 Test: blockdev writev readv 8 blocks ...passed 00:06:39.468 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.468 Test: blockdev writev readv block ...passed 00:06:39.468 Test: blockdev writev readv size > 128k ...passed 00:06:39.468 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.468 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.182467] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:2 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc38000 len:0x1000 00:06:39.468 [2024-12-15 02:02:04.182503] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.468 passed 00:06:39.468 Test: blockdev nvme passthru rw ...passed 00:06:39.468 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:02:04.183141] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC RESERVED / VENDOR SPECIFIC qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.468 [2024-12-15 02:02:04.183166] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.468 passed 00:06:39.468 Test: blockdev nvme admin passthru ...passed 00:06:39.468 Test: blockdev copy ...passed 00:06:39.468 Suite: bdevio tests on: Nvme2n1 00:06:39.468 Test: blockdev write read block ...passed 00:06:39.468 Test: blockdev write zeroes read block ...passed 00:06:39.468 Test: blockdev write zeroes read no split ...passed 00:06:39.468 Test: blockdev write zeroes read split ...passed 00:06:39.726 Test: blockdev write zeroes read split partial ...passed 00:06:39.726 Test: blockdev reset ...[2024-12-15 02:02:04.237684] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:12.0, 0] resetting controller 00:06:39.726 [2024-12-15 02:02:04.240978] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:12.0, 0] Resetting controller successful. 
00:06:39.726 passed 00:06:39.726 Test: blockdev write read 8 blocks ...passed 00:06:39.726 Test: blockdev write read size > 128k ...passed 00:06:39.726 Test: blockdev write read invalid size ...passed 00:06:39.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.726 Test: blockdev write read max offset ...passed 00:06:39.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.726 Test: blockdev writev readv 8 blocks ...passed 00:06:39.726 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.726 Test: blockdev writev readv block ...passed 00:06:39.726 Test: blockdev writev readv size > 128k ...passed 00:06:39.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.726 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.247787] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:0 len:1 SGL DATA BLOCK ADDRESS 0x2dfc34000 len:0x1000 00:06:39.726 [2024-12-15 02:02:04.247826] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.726 passed 00:06:39.726 Test: blockdev nvme passthru rw ...passed 00:06:39.726 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:02:04.248588] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:190 PRP1 0x0 PRP2 0x0 00:06:39.726 [2024-12-15 02:02:04.248610] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:190 cdw0:0 sqhd:001c p:1 m:0 dnr:1 00:06:39.726 passed 00:06:39.726 Test: blockdev nvme admin passthru ...passed 00:06:39.726 Test: blockdev copy ...passed 00:06:39.726 Suite: bdevio tests on: Nvme1n1p2 00:06:39.726 Test: blockdev write read block ...passed 00:06:39.726 Test: blockdev write zeroes read block ...passed 00:06:39.726 Test: blockdev write zeroes read no split ...passed 00:06:39.726 Test: blockdev write zeroes read split ...passed 00:06:39.726 Test: blockdev write zeroes read split partial ...passed 00:06:39.726 Test: blockdev reset ...[2024-12-15 02:02:04.307767] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.726 [2024-12-15 02:02:04.310439] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
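The Nvme1n1p2 and Nvme1n1p1 suites run against the GPT partitions carved on /dev/nvme0n1 earlier with parted/sgdisk, so their compare commands land at the partitions' absolute offsets: lba:655360 for p2 and lba:256 for p1, matching the offset_blocks values in the bdev dump above. The 50% split also checks out against the 5369 MB disk reported by parted:

  echo $(( 655360 * 4096 ))   # 2684354560 bytes, i.e. ~2684 MB: half of the 5369 MB /dev/nvme0n1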
00:06:39.726 passed 00:06:39.726 Test: blockdev write read 8 blocks ...passed 00:06:39.726 Test: blockdev write read size > 128k ...passed 00:06:39.726 Test: blockdev write read invalid size ...passed 00:06:39.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.726 Test: blockdev write read max offset ...passed 00:06:39.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.726 Test: blockdev writev readv 8 blocks ...passed 00:06:39.726 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.726 Test: blockdev writev readv block ...passed 00:06:39.726 Test: blockdev writev readv size > 128k ...passed 00:06:39.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.726 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.317246] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:655360 len:1 SGL DATA BLOCK ADDRESS 0x2dfc30000 len:0x1000 00:06:39.726 [2024-12-15 02:02:04.317285] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.726 passed 00:06:39.726 Test: blockdev nvme passthru rw ...passed 00:06:39.726 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.726 Test: blockdev nvme admin passthru ...passed 00:06:39.726 Test: blockdev copy ...passed 00:06:39.726 Suite: bdevio tests on: Nvme1n1p1 00:06:39.726 Test: blockdev write read block ...passed 00:06:39.726 Test: blockdev write zeroes read block ...passed 00:06:39.726 Test: blockdev write zeroes read no split ...passed 00:06:39.726 Test: blockdev write zeroes read split ...passed 00:06:39.726 Test: blockdev write zeroes read split partial ...passed 00:06:39.726 Test: blockdev reset ...[2024-12-15 02:02:04.362007] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:11.0, 0] resetting controller 00:06:39.726 [2024-12-15 02:02:04.365097] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:11.0, 0] Resetting controller successful. 
00:06:39.726 passed 00:06:39.726 Test: blockdev write read 8 blocks ...passed 00:06:39.726 Test: blockdev write read size > 128k ...passed 00:06:39.726 Test: blockdev write read invalid size ...passed 00:06:39.726 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.726 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.726 Test: blockdev write read max offset ...passed 00:06:39.726 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.726 Test: blockdev writev readv 8 blocks ...passed 00:06:39.726 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.726 Test: blockdev writev readv block ...passed 00:06:39.726 Test: blockdev writev readv size > 128k ...passed 00:06:39.726 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.726 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.371591] nvme_qpair.c: 243:nvme_io_qpair_print_command: *NOTICE*: COMPARE sqid:1 cid:190 nsid:1 lba:256 len:1 SGL DATA BLOCK ADDRESS 0x2b9a0e000 len:0x1000 00:06:39.726 [2024-12-15 02:02:04.371625] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: COMPARE FAILURE (02/85) qid:1 cid:190 cdw0:0 sqhd:0018 p:1 m:0 dnr:1 00:06:39.726 passed 00:06:39.726 Test: blockdev nvme passthru rw ...passed 00:06:39.726 Test: blockdev nvme passthru vendor specific ...passed 00:06:39.726 Test: blockdev nvme admin passthru ...passed 00:06:39.726 Test: blockdev copy ...passed 00:06:39.726 Suite: bdevio tests on: Nvme0n1 00:06:39.726 Test: blockdev write read block ...passed 00:06:39.726 Test: blockdev write zeroes read block ...passed 00:06:39.727 Test: blockdev write zeroes read no split ...passed 00:06:39.727 Test: blockdev write zeroes read split ...passed 00:06:39.727 Test: blockdev write zeroes read split partial ...passed 00:06:39.727 Test: blockdev reset ...[2024-12-15 02:02:04.417253] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:06:39.727 [2024-12-15 02:02:04.419949] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:06:39.727 passed 00:06:39.727 Test: blockdev write read 8 blocks ...passed 00:06:39.727 Test: blockdev write read size > 128k ...passed 00:06:39.727 Test: blockdev write read invalid size ...passed 00:06:39.727 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:06:39.727 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:06:39.727 Test: blockdev write read max offset ...passed 00:06:39.727 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:06:39.727 Test: blockdev writev readv 8 blocks ...passed 00:06:39.727 Test: blockdev writev readv 30 x 1block ...passed 00:06:39.727 Test: blockdev writev readv block ...passed 00:06:39.727 Test: blockdev writev readv size > 128k ...passed 00:06:39.727 Test: blockdev writev readv size > 128k in two iovs ...passed 00:06:39.727 Test: blockdev comparev and writev ...[2024-12-15 02:02:04.426506] bdevio.c: 727:blockdev_comparev_and_writev: *ERROR*: skipping comparev_and_writev on bdev Nvme0n1 since it has 00:06:39.727 separate metadata which is not supported yet. 
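Note: the "(02/85)" and "(00/01)" pairs in the completion notices above are the NVMe status code type and status code: SCT 0x2 / SC 0x85 is Compare Failure (media and data integrity errors class), the miscompare the comparev test provokes deliberately, while SCT 0x0 / SC 0x01 is Invalid Opcode, the expected reply to the vendor-specific passthru probe. On Nvme0n1 the comparev step is skipped because that namespace carries separate metadata, which bdevio does not exercise yet. A trivial decode of the pair as printed (purely illustrative; field meanings per the NVMe base spec are an assumption here, not shown in the log):

    # (SCT/SC) as printed by spdk_nvme_print_completion; dnr:1 means do-not-retry was set
    printf 'SCT=%#x SC=%#x -> Compare Failure\n' 0x2 0x85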
00:06:39.727 passed 00:06:39.727 Test: blockdev nvme passthru rw ...passed 00:06:39.727 Test: blockdev nvme passthru vendor specific ...[2024-12-15 02:02:04.427325] nvme_qpair.c: 218:nvme_admin_qpair_print_command: *NOTICE*: FABRIC CONNECT qid:1 cid:191 PRP1 0x0 PRP2 0x0 00:06:39.727 [2024-12-15 02:02:04.427451] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:1 cid:191 cdw0:0 sqhd:0017 p:1 m:0 dnr:1 00:06:39.727 passed 00:06:39.727 Test: blockdev nvme admin passthru ...passed 00:06:39.727 Test: blockdev copy ...passed 00:06:39.727 00:06:39.727 Run Summary: Type Total Ran Passed Failed Inactive 00:06:39.727 suites 7 7 n/a 0 0 00:06:39.727 tests 161 161 161 0 0 00:06:39.727 asserts 1025 1025 1025 0 n/a 00:06:39.727 00:06:39.727 Elapsed time = 1.168 seconds 00:06:39.727 0 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 63165 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 63165 ']' 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 63165 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:39.727 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63165 00:06:39.987 killing process with pid 63165 00:06:39.987 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:39.987 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:39.987 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63165' 00:06:39.987 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@973 -- # kill 63165 00:06:39.987 02:02:04 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@978 -- # wait 63165 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:06:40.561 00:06:40.561 real 0m2.146s 00:06:40.561 user 0m5.511s 00:06:40.561 sys 0m0.276s 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:06:40.561 ************************************ 00:06:40.561 END TEST bdev_bounds 00:06:40.561 ************************************ 00:06:40.561 02:02:05 blockdev_nvme_gpt -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.561 02:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:06:40.561 02:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:40.561 02:02:05 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:40.561 ************************************ 00:06:40.561 START TEST bdev_nbd 00:06:40.561 ************************************ 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '' 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:06:40.561 02:02:05 
blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=7 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=7 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=63225 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 63225 /var/tmp/spdk-nbd.sock 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 63225 ']' 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:06:40.561 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:06:40.561 02:02:05 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:40.561 [2024-12-15 02:02:05.271924] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
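Note: from here the harness runs the NBD half of the test: it launches bdev_svc with the generated bdev.json, waits for the RPC socket to come up, then attaches each bdev to a /dev/nbdN node via RPC. A minimal by-hand equivalent, using only the calls visible in this trace (paths mirror this environment; the one-second sleep stands in for the harness's waitforlisten helper and is a simplification):

    SPDK=/home/vagrant/spdk_repo/spdk
    SOCK=/var/tmp/spdk-nbd.sock
    # start a bare bdev service with the same JSON config
    "$SPDK"/test/app/bdev_svc/bdev_svc -r "$SOCK" -i 0 --json "$SPDK"/test/bdev/bdev.json &
    sleep 1
    # export one bdev over NBD, list the mappings, then tear it down
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_start_disk Nvme0n1 /dev/nbd0
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_get_disks
    "$SPDK"/scripts/rpc.py -s "$SOCK" nbd_stop_disk /dev/nbd0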
00:06:40.561 [2024-12-15 02:02:05.272372] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:06:40.823 [2024-12-15 02:02:05.429729] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:06:40.823 [2024-12-15 02:02:05.509371] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:41.389 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.648 1+0 records in 00:06:41.648 1+0 records out 00:06:41.648 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000484335 s, 8.5 MB/s 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:41.648 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:41.907 1+0 records in 00:06:41.907 1+0 records out 00:06:41.907 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000354276 s, 11.6 MB/s 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:41.907 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme1n1p2 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.165 1+0 records in 00:06:42.165 1+0 records out 00:06:42.165 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000521012 s, 7.9 MB/s 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.165 02:02:06 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.423 1+0 records in 00:06:42.423 1+0 records out 00:06:42.423 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000589835 s, 6.9 MB/s 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.423 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.681 1+0 records in 00:06:42.681 1+0 records out 00:06:42.681 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000345227 s, 11.9 MB/s 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.681 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.682 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.682 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk 
Nvme2n3 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:42.940 1+0 records in 00:06:42.940 1+0 records out 00:06:42.940 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536476 s, 7.6 MB/s 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:42.940 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd6 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd6 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd6 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd6 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd6 /proc/partitions 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@889 -- # dd if=/dev/nbd6 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:43.199 1+0 records in 00:06:43.199 1+0 records out 00:06:43.199 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000440148 s, 9.3 MB/s 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 7 )) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd0", 00:06:43.199 "bdev_name": "Nvme0n1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd1", 00:06:43.199 "bdev_name": "Nvme1n1p1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd2", 00:06:43.199 "bdev_name": "Nvme1n1p2" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd3", 00:06:43.199 "bdev_name": "Nvme2n1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd4", 00:06:43.199 "bdev_name": "Nvme2n2" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd5", 00:06:43.199 "bdev_name": "Nvme2n3" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd6", 00:06:43.199 "bdev_name": "Nvme3n1" 00:06:43.199 } 00:06:43.199 ]' 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd0", 00:06:43.199 "bdev_name": "Nvme0n1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd1", 00:06:43.199 "bdev_name": "Nvme1n1p1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd2", 00:06:43.199 "bdev_name": "Nvme1n1p2" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd3", 00:06:43.199 "bdev_name": "Nvme2n1" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd4", 00:06:43.199 "bdev_name": "Nvme2n2" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd5", 00:06:43.199 "bdev_name": "Nvme2n3" 00:06:43.199 }, 00:06:43.199 { 00:06:43.199 "nbd_device": "/dev/nbd6", 00:06:43.199 "bdev_name": "Nvme3n1" 00:06:43.199 } 00:06:43.199 ]' 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5 /dev/nbd6' 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 
-- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6') 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.199 02:02:07 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.458 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.717 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd2 /proc/partitions 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:43.975 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:43.975 02:02:08 
blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.234 02:02:08 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:44.493 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd6 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd6 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd6 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 
-- # local nbd_name=nbd6 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd6 /proc/partitions 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:44.751 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'Nvme0n1 Nvme1n1p1 Nvme1n1p2 Nvme2n1 Nvme2n2 Nvme2n3 Nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('Nvme0n1' 'Nvme1n1p1' 'Nvme1n1p2' 'Nvme2n1' 'Nvme2n2' 'Nvme2n3' 'Nvme3n1') 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:06:45.010 
02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.010 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme0n1 /dev/nbd0 00:06:45.269 /dev/nbd0 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.269 1+0 records in 00:06:45.269 1+0 records out 00:06:45.269 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000416006 s, 9.8 MB/s 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.269 02:02:09 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p1 /dev/nbd1 00:06:45.527 /dev/nbd1 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.527 02:02:10 
blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.527 1+0 records in 00:06:45.527 1+0 records out 00:06:45.527 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000322958 s, 12.7 MB/s 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.527 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.528 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.528 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme1n1p2 /dev/nbd10 00:06:45.786 /dev/nbd10 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:45.786 1+0 records in 00:06:45.786 1+0 records out 00:06:45.786 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000498598 s, 8.2 MB/s 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- 
common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:45.786 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n1 /dev/nbd11 00:06:46.044 /dev/nbd11 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.044 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.045 1+0 records in 00:06:46.045 1+0 records out 00:06:46.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000536278 s, 7.6 MB/s 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n2 /dev/nbd12 00:06:46.045 /dev/nbd12 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 
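Note: every nbd_start_disk above is followed by the same readiness check, visible in the repeated waitfornbd traces: poll /proc/partitions for the new nbdN entry (up to 20 tries), then read one 4 KiB block with O_DIRECT and confirm the copied size is non-zero. A condensed sketch of that loop (the retry delay and scratch-file path are assumptions; the helper's internals are only partially visible in this trace):

    nbd=nbd12                                  # device name under test
    for i in $(seq 1 20); do
        grep -q -w "$nbd" /proc/partitions && break
        sleep 0.1
    done
    dd if="/dev/$nbd" of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    test "$(stat -c %s /tmp/nbdtest)" != 0     # same size check the harness performs
    rm -f /tmp/nbdtest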
00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.045 1+0 records in 00:06:46.045 1+0 records out 00:06:46.045 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000566922 s, 7.2 MB/s 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.045 02:02:10 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme2n3 /dev/nbd13 00:06:46.303 /dev/nbd13 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.303 1+0 records in 00:06:46.303 1+0 records out 00:06:46.303 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000350763 s, 11.7 MB/s 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.303 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk Nvme3n1 /dev/nbd14 00:06:46.561 /dev/nbd14 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd14 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd14 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd14 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd14 /proc/partitions 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd14 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:06:46.561 1+0 records in 00:06:46.561 1+0 records out 00:06:46.561 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000431154 s, 9.5 MB/s 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:06:46.561 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 7 )) 00:06:46.562 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:46.562 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:46.562 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd0", 00:06:46.820 "bdev_name": "Nvme0n1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd1", 00:06:46.820 "bdev_name": "Nvme1n1p1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd10", 00:06:46.820 "bdev_name": "Nvme1n1p2" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd11", 00:06:46.820 "bdev_name": "Nvme2n1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd12", 00:06:46.820 "bdev_name": "Nvme2n2" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd13", 00:06:46.820 "bdev_name": "Nvme2n3" 
00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd14", 00:06:46.820 "bdev_name": "Nvme3n1" 00:06:46.820 } 00:06:46.820 ]' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd0", 00:06:46.820 "bdev_name": "Nvme0n1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd1", 00:06:46.820 "bdev_name": "Nvme1n1p1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd10", 00:06:46.820 "bdev_name": "Nvme1n1p2" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd11", 00:06:46.820 "bdev_name": "Nvme2n1" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd12", 00:06:46.820 "bdev_name": "Nvme2n2" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd13", 00:06:46.820 "bdev_name": "Nvme2n3" 00:06:46.820 }, 00:06:46.820 { 00:06:46.820 "nbd_device": "/dev/nbd14", 00:06:46.820 "bdev_name": "Nvme3n1" 00:06:46.820 } 00:06:46.820 ]' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:06:46.820 /dev/nbd1 00:06:46.820 /dev/nbd10 00:06:46.820 /dev/nbd11 00:06:46.820 /dev/nbd12 00:06:46.820 /dev/nbd13 00:06:46.820 /dev/nbd14' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:06:46.820 /dev/nbd1 00:06:46.820 /dev/nbd10 00:06:46.820 /dev/nbd11 00:06:46.820 /dev/nbd12 00:06:46.820 /dev/nbd13 00:06:46.820 /dev/nbd14' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=7 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 7 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=7 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 7 -ne 7 ']' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' write 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:06:46.820 256+0 records in 00:06:46.820 256+0 records out 00:06:46.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0122134 s, 85.9 MB/s 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:06:46.820 256+0 records in 00:06:46.820 256+0 records out 00:06:46.820 1048576 bytes (1.0 MB, 1.0 MiB) copied, 
0.0611846 s, 17.1 MB/s 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:46.820 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:06:47.078 256+0 records in 00:06:47.078 256+0 records out 00:06:47.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606093 s, 17.3 MB/s 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:06:47.078 256+0 records in 00:06:47.078 256+0 records out 00:06:47.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0606531 s, 17.3 MB/s 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:06:47.078 256+0 records in 00:06:47.078 256+0 records out 00:06:47.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0641729 s, 16.3 MB/s 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:06:47.078 256+0 records in 00:06:47.078 256+0 records out 00:06:47.078 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0609974 s, 17.2 MB/s 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.078 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:06:47.338 256+0 records in 00:06:47.338 256+0 records out 00:06:47.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.0597313 s, 17.6 MB/s 00:06:47.338 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:11 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd14 bs=4096 count=256 oflag=direct 00:06:47.338 256+0 records in 00:06:47.338 256+0 records out 00:06:47.338 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.129521 s, 8.1 MB/s 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' verify 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in 
"${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.338 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd14 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13 /dev/nbd14' 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14') 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.339 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- 
bdev/nbd_common.sh@45 -- # return 0 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.598 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:47.856 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd10 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.115 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.373 02:02:12 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename 
/dev/nbd12 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:06:48.631 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:06:48.632 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd14 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd14 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd14 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd14 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd14 /proc/partitions 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:48.892 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@64 -- # 
nbd_disks_name= 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:06:49.151 02:02:13 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:06:49.410 malloc_lvol_verify 00:06:49.410 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:06:49.668 0292e41d-db5b-416e-b2ab-cb69c71321c9 00:06:49.668 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:06:49.926 798b3156-898c-477c-a7ea-9784de38ed74 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:06:49.926 /dev/nbd0 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:06:49.926 mke2fs 1.47.0 (5-Feb-2023) 00:06:49.926 Discarding device blocks: 0/4096 done 00:06:49.926 Creating filesystem with 4096 1k blocks and 1024 inodes 00:06:49.926 00:06:49.926 Allocating group tables: 0/1 done 00:06:49.926 Writing inode tables: 0/1 done 00:06:49.926 Creating journal (1024 blocks): done 00:06:49.926 Writing superblocks and filesystem accounting information: 0/1 done 00:06:49.926 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in 
"${nbd_list[@]}" 00:06:49.926 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 63225 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 63225 ']' 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 63225 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63225 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63225' 00:06:50.185 killing process with pid 63225 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@973 -- # kill 63225 00:06:50.185 02:02:14 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@978 -- # wait 63225 00:06:51.124 02:02:15 blockdev_nvme_gpt.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:06:51.124 00:06:51.124 real 0m10.479s 00:06:51.124 user 0m15.137s 00:06:51.124 sys 0m3.289s 00:06:51.124 ************************************ 00:06:51.124 END TEST bdev_nbd 00:06:51.124 ************************************ 00:06:51.124 02:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:06:51.124 02:02:15 blockdev_nvme_gpt.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@800 -- # [[ y == y ]] 00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = nvme ']' 00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@801 -- # '[' gpt = gpt ']' 00:06:51.124 skipping fio tests on NVMe due to multi-ns failures. 00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@803 -- # echo 'skipping fio tests on NVMe due to multi-ns failures.' 
00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT 00:06:51.124 02:02:15 blockdev_nvme_gpt -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:51.124 02:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']' 00:06:51.124 02:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:06:51.124 02:02:15 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:06:51.124 ************************************ 00:06:51.124 START TEST bdev_verify 00:06:51.124 ************************************ 00:06:51.124 02:02:15 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 '' 00:06:51.124 [2024-12-15 02:02:15.802419] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:06:51.124 [2024-12-15 02:02:15.802534] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63634 ] 00:06:51.383 [2024-12-15 02:02:15.962057] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:06:51.383 [2024-12-15 02:02:16.060802] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:06:51.383 [2024-12-15 02:02:16.060878] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:06:51.952 Running I/O for 5 seconds... 
00:06:54.279 19328.00 IOPS, 75.50 MiB/s [2024-12-15T02:02:19.990Z] 19328.00 IOPS, 75.50 MiB/s [2024-12-15T02:02:20.930Z] 20373.33 IOPS, 79.58 MiB/s [2024-12-15T02:02:21.873Z] 20448.00 IOPS, 79.88 MiB/s [2024-12-15T02:02:21.873Z] 20185.60 IOPS, 78.85 MiB/s
00:06:57.108 Latency(us) [2024-12-15T02:02:21.873Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:06:57.108 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0xbd0bd
00:06:57.108 Nvme0n1 : 5.05 1419.11 5.54 0.00 0.00 89927.02 19257.50 95581.74
00:06:57.108 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:06:57.108 Nvme0n1 : 5.07 1414.84 5.53 0.00 0.00 90189.37 19156.68 91548.75
00:06:57.108 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x4ff80
00:06:57.108 Nvme1n1p1 : 5.05 1418.68 5.54 0.00 0.00 89779.38 22080.59 86305.87
00:06:57.108 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x4ff80 length 0x4ff80
00:06:57.108 Nvme1n1p1 : 5.07 1414.43 5.53 0.00 0.00 90036.00 21374.82 83482.78
00:06:57.108 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x4ff7f
00:06:57.108 Nvme1n1p2 : 5.05 1418.22 5.54 0.00 0.00 89672.26 22685.54 78643.20
00:06:57.108 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x4ff7f length 0x4ff7f
00:06:57.108 Nvme1n1p2 : 5.07 1413.99 5.52 0.00 0.00 89815.03 24399.56 72593.72
00:06:57.108 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x80000
00:06:57.108 Nvme2n1 : 5.06 1417.81 5.54 0.00 0.00 89547.57 22181.42 75820.11
00:06:57.108 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x80000 length 0x80000
00:06:57.108 Nvme2n1 : 5.07 1413.60 5.52 0.00 0.00 89662.07 24500.38 69770.63
00:06:57.108 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x80000
00:06:57.108 Nvme2n2 : 5.07 1425.56 5.57 0.00 0.00 88826.39 3049.94 71383.83
00:06:57.108 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x80000 length 0x80000
00:06:57.108 Nvme2n2 : 5.07 1413.23 5.52 0.00 0.00 89457.63 23996.26 74206.92
00:06:57.108 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x80000
00:06:57.108 Nvme2n3 : 5.10 1431.60 5.59 0.00 0.00 88323.88 13712.15 73400.32
00:06:57.108 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x80000 length 0x80000
00:06:57.108 Nvme2n3 : 5.10 1431.99 5.59 0.00 0.00 88234.85 8570.09 77433.30
00:06:57.108 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x0 length 0x20000
00:06:57.108 Nvme3n1 : 5.10 1430.52 5.59 0.00 0.00 88202.67 8872.57 75416.81
00:06:57.108 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:06:57.108 Verification LBA range: start 0x20000 length 0x20000
00:06:57.108 Nvme3n1 : 5.10 1431.58 5.59 0.00 0.00 88099.80 6553.60 79046.50 [2024-12-15T02:02:21.873Z] ===================================================================================================================
00:06:57.108 [2024-12-15T02:02:21.873Z] Total : 19895.16 77.72 0.00 0.00 89263.94 3049.94 95581.74
00:06:58.494
00:06:58.494 real 0m7.229s
00:06:58.494 user 0m13.527s
00:06:58.494 sys 0m0.215s
00:06:58.494 ************************************
00:06:58.494 END TEST bdev_verify
00:06:58.494 ************************************
00:06:58.494 02:02:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:06:58.494 02:02:22 blockdev_nvme_gpt.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:06:58.494 02:02:23 blockdev_nvme_gpt -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:58.494 02:02:23 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:06:58.494 02:02:23 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:06:58.494 02:02:23 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:06:58.494 ************************************
00:06:58.494 START TEST bdev_verify_big_io
00:06:58.494 ************************************
00:06:58.494 02:02:23 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:06:58.754 [2024-12-15 02:02:23.103490] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:06:58.754 [2024-12-15 02:02:23.103607] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63732 ]
00:06:58.754 [2024-12-15 02:02:23.263500] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:06:58.754 [2024-12-15 02:02:23.362789] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:06:58.754 [2024-12-15 02:02:23.362874] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:06:59.325 Running I/O for 5 seconds...
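One way to sanity-check these bdevperf tables: the MiB/s column is just IOPS times the IO size (4096 bytes for the verify pass above) divided by 2^20. Checking the first progress sample:

    awk 'BEGIN { printf "%.2f MiB/s\n", 19328.00 * 4096 / 1048576 }'   # prints 75.50 MiB/s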
00:07:03.527 1260.00 IOPS, 78.75 MiB/s [2024-12-15T02:02:30.192Z] 1628.50 IOPS, 101.78 MiB/s [2024-12-15T02:02:30.451Z] 1916.67 IOPS, 119.79 MiB/s [2024-12-15T02:02:30.451Z] 2369.50 IOPS, 148.09 MiB/s
00:07:05.686 Latency(us) [2024-12-15T02:02:30.451Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:05.686 Job: Nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0xbd0b
00:07:05.686 Nvme0n1 : 5.75 101.94 6.37 0.00 0.00 1193638.79 11695.66 1897115.96
00:07:05.686 Job: Nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0xbd0b length 0xbd0b
00:07:05.686 Nvme0n1 : 5.84 98.67 6.17 0.00 0.00 1233827.93 26617.70 1406705.03
00:07:05.686 Job: Nvme1n1p1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x4ff8
00:07:05.686 Nvme1n1p1 : 5.86 105.46 6.59 0.00 0.00 1119588.51 36901.81 1910021.51
00:07:05.686 Job: Nvme1n1p1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x4ff8 length 0x4ff8
00:07:05.686 Nvme1n1p1 : 6.15 67.63 4.23 0.00 0.00 1731278.86 106470.79 2348810.24
00:07:05.686 Job: Nvme1n1p2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x4ff7
00:07:05.686 Nvme1n1p2 : 6.02 109.57 6.85 0.00 0.00 1039091.15 60494.77 1922927.06
00:07:05.686 Job: Nvme1n1p2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x4ff7 length 0x4ff7
00:07:05.686 Nvme1n1p2 : 6.03 98.91 6.18 0.00 0.00 1155448.06 87112.47 2064888.12
00:07:05.686 Job: Nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x8000
00:07:05.686 Nvme2n1 : 6.03 113.86 7.12 0.00 0.00 975188.61 80256.39 1948738.17
00:07:05.686 Job: Nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x8000 length 0x8000
00:07:05.686 Nvme2n1 : 6.03 106.02 6.63 0.00 0.00 1038339.54 103244.41 1226027.32
00:07:05.686 Job: Nvme2n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x8000
00:07:05.686 Nvme2n2 : 6.10 117.40 7.34 0.00 0.00 910342.07 70577.23 1974549.27
00:07:05.686 Job: Nvme2n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x8000 length 0x8000
00:07:05.686 Nvme2n2 : 6.09 115.46 7.22 0.00 0.00 934423.89 59284.87 1174405.12
00:07:05.686 Job: Nvme2n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x8000
00:07:05.686 Nvme2n3 : 6.24 131.66 8.23 0.00 0.00 785340.96 45774.38 2000360.37
00:07:05.686 Job: Nvme2n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x8000 length 0x8000
00:07:05.686 Nvme2n3 : 6.16 124.76 7.80 0.00 0.00 838433.74 45169.43 1206669.00
00:07:05.686 Job: Nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x0 length 0x2000
00:07:05.686 Nvme3n1 : 6.28 160.59 10.04 0.00 0.00 624709.92 579.74 2026171.47
00:07:05.686 Job: Nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:07:05.686 Verification LBA range: start 0x2000 length 0x2000
00:07:05.686 Nvme3n1 : 6.23 147.91 9.24 0.00 0.00 684578.45 1386.34 1238932.87 [2024-12-15T02:02:30.451Z] ===================================================================================================================
00:07:05.686 [2024-12-15T02:02:30.451Z] Total : 1599.84 99.99 0.00 0.00 966262.03 579.74 2348810.24
00:07:07.583
00:07:07.583 real 0m8.904s
00:07:07.583 user 0m16.898s
00:07:07.583 sys 0m0.222s
00:07:07.583 02:02:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:07.583 02:02:31 blockdev_nvme_gpt.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:07:07.583 ************************************
00:07:07.583 END TEST bdev_verify_big_io
00:07:07.583 ************************************
00:07:07.583 02:02:31 blockdev_nvme_gpt -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:07.583 02:02:31 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:07.583 02:02:31 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:07.583 02:02:31 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:07.583 ************************************
00:07:07.583 START TEST bdev_write_zeroes
00:07:07.583 ************************************
00:07:07.583 02:02:31 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:07.583 [2024-12-15 02:02:32.036555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:07.583 [2024-12-15 02:02:32.036969] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63841 ]
00:07:07.583 [2024-12-15 02:02:32.190880] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:07:07.583 [2024-12-15 02:02:32.284951] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:07:08.147 Running I/O for 1 seconds...
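For reference, the three bdevperf passes in this job differ only in IO size, workload, and duration; the flags below are copied from the run_test lines above (the two variables are shorthand introduced here for readability):

    BDEVPERF=/home/vagrant/spdk_repo/spdk/build/examples/bdevperf
    CONF=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
    $BDEVPERF --json $CONF -q 128 -o 4096  -w verify       -t 5 -C -m 0x3   # bdev_verify
    $BDEVPERF --json $CONF -q 128 -o 65536 -w verify       -t 5 -C -m 0x3   # bdev_verify_big_io
    $BDEVPERF --json $CONF -q 128 -o 4096  -w write_zeroes -t 1             # bdev_write_zeroes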
00:07:09.535 69888.00 IOPS, 273.00 MiB/s
00:07:09.535
00:07:09.535 Latency(us) [2024-12-15T02:02:34.301Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:07:09.536 Job: Nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme0n1 : 1.02 9967.38 38.94 0.00 0.00 12815.17 10889.06 24601.21
00:07:09.536 Job: Nvme1n1p1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme1n1p1 : 1.02 9955.24 38.89 0.00 0.00 12809.61 10637.00 23996.26
00:07:09.536 Job: Nvme1n1p2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme1n1p2 : 1.02 9943.20 38.84 0.00 0.00 12800.26 10637.00 23693.78
00:07:09.536 Job: Nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme2n1 : 1.02 9932.03 38.80 0.00 0.00 12791.79 10838.65 22887.19
00:07:09.536 Job: Nvme2n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme2n2 : 1.03 9920.80 38.75 0.00 0.00 12781.86 10889.06 22483.89
00:07:09.536 Job: Nvme2n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme2n3 : 1.03 9909.70 38.71 0.00 0.00 12771.93 10889.06 21677.29
00:07:09.536 Job: Nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:07:09.536 Nvme3n1 : 1.03 9898.62 38.67 0.00 0.00 12746.44 9931.22 23492.14 [2024-12-15T02:02:34.301Z] ===================================================================================================================
00:07:09.536 [2024-12-15T02:02:34.301Z] Total : 69526.97 271.59 0.00 0.00 12788.15 9931.22 24601.21
00:07:10.105
00:07:10.105 real 0m2.659s
00:07:10.105 user 0m2.381s
00:07:10.105 sys 0m0.164s
00:07:10.105 02:02:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:10.105 ************************************
00:07:10.105 END TEST bdev_write_zeroes
00:07:10.105 ************************************
00:07:10.105 02:02:34 blockdev_nvme_gpt.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:07:10.105 02:02:34 blockdev_nvme_gpt -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:10.105 02:02:34 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:07:10.105 02:02:34 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:10.105 02:02:34 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x
00:07:10.105 ************************************
00:07:10.105 START TEST bdev_json_nonenclosed
00:07:10.105 ************************************
00:07:10.105 02:02:34 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:07:10.105 [2024-12-15 02:02:34.767368] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:07:10.105 [2024-12-15 02:02:34.767483] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63894 ] 00:07:10.365 [2024-12-15 02:02:34.924519] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.365 [2024-12-15 02:02:35.018954] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.365 [2024-12-15 02:02:35.019024] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}. 00:07:10.365 [2024-12-15 02:02:35.019041] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:10.365 [2024-12-15 02:02:35.019050] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:10.625 00:07:10.625 real 0m0.492s 00:07:10.625 user 0m0.293s 00:07:10.625 sys 0m0.095s 00:07:10.625 ************************************ 00:07:10.625 END TEST bdev_json_nonenclosed 00:07:10.625 02:02:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:10.625 02:02:35 blockdev_nvme_gpt.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x 00:07:10.625 ************************************ 00:07:10.625 02:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.625 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']' 00:07:10.625 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:10.625 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:10.625 ************************************ 00:07:10.625 START TEST bdev_json_nonarray 00:07:10.625 ************************************ 00:07:10.625 02:02:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 '' 00:07:10.625 [2024-12-15 02:02:35.320980] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:10.625 [2024-12-15 02:02:35.321089] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63914 ] 00:07:10.884 [2024-12-15 02:02:35.478169] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:10.884 [2024-12-15 02:02:35.574454] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:10.884 [2024-12-15 02:02:35.574534] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array. 
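The two negative tests here differ only in the malformed config they feed bdevperf. Judging from the error messages, the inputs look roughly like this (the exact file contents are an assumption; only the error strings are taken from the log):

    # nonenclosed.json: top-level key/value without the enclosing object
    #   -> json_config.c: "Invalid JSON configuration: not enclosed in {}."
    echo '"subsystems": []' > nonenclosed.json

    # nonarray.json: "subsystems" present but not an array
    #   -> json_config.c: "Invalid JSON configuration: 'subsystems' should be an array."
    echo '{ "subsystems": {} }' > nonarray.json

    # a well-formed config encloses everything in {} and makes "subsystems" an array
    echo '{ "subsystems": [] }' > valid.json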
00:07:10.884 [2024-12-15 02:02:35.574551] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address: 00:07:10.884 [2024-12-15 02:02:35.574561] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero 00:07:11.145 00:07:11.145 real 0m0.492s 00:07:11.145 user 0m0.291s 00:07:11.145 sys 0m0.097s 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x 00:07:11.145 ************************************ 00:07:11.145 END TEST bdev_json_nonarray 00:07:11.145 ************************************ 00:07:11.145 02:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@824 -- # [[ gpt == bdev ]] 00:07:11.145 02:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@832 -- # [[ gpt == gpt ]] 00:07:11.145 02:02:35 blockdev_nvme_gpt -- bdev/blockdev.sh@833 -- # run_test bdev_gpt_uuid bdev_gpt_uuid 00:07:11.145 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:11.145 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:11.145 02:02:35 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:11.145 ************************************ 00:07:11.145 START TEST bdev_gpt_uuid 00:07:11.145 ************************************ 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1129 -- # bdev_gpt_uuid 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@651 -- # local bdev 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@653 -- # start_spdk_tgt 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=63945 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@49 -- # waitforlisten 63945 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@835 -- # '[' -z 63945 ']' 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@840 -- # local max_retries=100 00:07:11.145 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@844 -- # xtrace_disable 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:11.145 02:02:35 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:07:11.145 [2024-12-15 02:02:35.891487] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
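The gpt_uuid assertions in the trace that follows boil down to fetching each partition bdev by its partition UUID and comparing the GPT metadata fields; a sketch of the pattern (rpc.py stands in for the harness's rpc_cmd wrapper, and the UUID is the SPDK_TEST_first partition queried below):

    RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    uuid=6f89f330-603b-4116-ac73-2ca8eae53030
    bdev=$($RPC bdev_get_bdevs -b "$uuid")                               # blockdev.sh@658
    [[ $(jq -r 'length' <<<"$bdev") == 1 ]]                              # exactly one match
    [[ $(jq -r '.[0].aliases[0]' <<<"$bdev") == "$uuid" ]]               # alias is the partition UUID
    [[ $(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<<"$bdev") == "$uuid" ]]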
00:07:11.145 [2024-12-15 02:02:35.891603] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid63945 ] 00:07:11.405 [2024-12-15 02:02:36.051588] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:07:11.405 [2024-12-15 02:02:36.146882] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:07:11.975 02:02:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:07:11.975 02:02:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@868 -- # return 0 00:07:11.975 02:02:36 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@655 -- # rpc_cmd load_config -j /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:11.975 02:02:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:11.975 02:02:36 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.547 Some configs were skipped because the RPC state that can call them passed over. 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@656 -- # rpc_cmd bdev_wait_for_examine 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # rpc_cmd bdev_get_bdevs -b 6f89f330-603b-4116-ac73-2ca8eae53030 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@658 -- # bdev='[ 00:07:12.547 { 00:07:12.547 "name": "Nvme1n1p1", 00:07:12.547 "aliases": [ 00:07:12.547 "6f89f330-603b-4116-ac73-2ca8eae53030" 00:07:12.547 ], 00:07:12.547 "product_name": "GPT Disk", 00:07:12.547 "block_size": 4096, 00:07:12.547 "num_blocks": 655104, 00:07:12.547 "uuid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:12.547 "assigned_rate_limits": { 00:07:12.547 "rw_ios_per_sec": 0, 00:07:12.547 "rw_mbytes_per_sec": 0, 00:07:12.547 "r_mbytes_per_sec": 0, 00:07:12.547 "w_mbytes_per_sec": 0 00:07:12.547 }, 00:07:12.547 "claimed": false, 00:07:12.547 "zoned": false, 00:07:12.547 "supported_io_types": { 00:07:12.547 "read": true, 00:07:12.547 "write": true, 00:07:12.547 "unmap": true, 00:07:12.547 "flush": true, 00:07:12.547 "reset": true, 00:07:12.547 "nvme_admin": false, 00:07:12.547 "nvme_io": false, 00:07:12.547 "nvme_io_md": false, 00:07:12.547 "write_zeroes": true, 00:07:12.547 "zcopy": false, 00:07:12.547 "get_zone_info": false, 00:07:12.547 "zone_management": false, 00:07:12.547 "zone_append": false, 00:07:12.547 "compare": true, 00:07:12.547 "compare_and_write": false, 00:07:12.547 "abort": true, 00:07:12.547 "seek_hole": false, 00:07:12.547 "seek_data": false, 00:07:12.547 "copy": true, 00:07:12.547 "nvme_iov_md": false 00:07:12.547 }, 00:07:12.547 "driver_specific": { 
00:07:12.547 "gpt": { 00:07:12.547 "base_bdev": "Nvme1n1", 00:07:12.547 "offset_blocks": 256, 00:07:12.547 "partition_type_guid": "6527994e-2c5a-4eec-9613-8f5944074e8b", 00:07:12.547 "unique_partition_guid": "6f89f330-603b-4116-ac73-2ca8eae53030", 00:07:12.547 "partition_name": "SPDK_TEST_first" 00:07:12.547 } 00:07:12.547 } 00:07:12.547 } 00:07:12.547 ]' 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # jq -r length 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@659 -- # [[ 1 == \1 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # jq -r '.[0].aliases[0]' 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@660 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@661 -- # [[ 6f89f330-603b-4116-ac73-2ca8eae53030 == \6\f\8\9\f\3\3\0\-\6\0\3\b\-\4\1\1\6\-\a\c\7\3\-\2\c\a\8\e\a\e\5\3\0\3\0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # rpc_cmd bdev_get_bdevs -b abf1734f-66e5-4c0f-aa29-4021d4d307df 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@563 -- # xtrace_disable 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@663 -- # bdev='[ 00:07:12.547 { 00:07:12.547 "name": "Nvme1n1p2", 00:07:12.547 "aliases": [ 00:07:12.547 "abf1734f-66e5-4c0f-aa29-4021d4d307df" 00:07:12.547 ], 00:07:12.547 "product_name": "GPT Disk", 00:07:12.547 "block_size": 4096, 00:07:12.547 "num_blocks": 655103, 00:07:12.547 "uuid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:12.547 "assigned_rate_limits": { 00:07:12.547 "rw_ios_per_sec": 0, 00:07:12.547 "rw_mbytes_per_sec": 0, 00:07:12.547 "r_mbytes_per_sec": 0, 00:07:12.547 "w_mbytes_per_sec": 0 00:07:12.547 }, 00:07:12.547 "claimed": false, 00:07:12.547 "zoned": false, 00:07:12.547 "supported_io_types": { 00:07:12.547 "read": true, 00:07:12.547 "write": true, 00:07:12.547 "unmap": true, 00:07:12.547 "flush": true, 00:07:12.547 "reset": true, 00:07:12.547 "nvme_admin": false, 00:07:12.547 "nvme_io": false, 00:07:12.547 "nvme_io_md": false, 00:07:12.547 "write_zeroes": true, 00:07:12.547 "zcopy": false, 00:07:12.547 "get_zone_info": false, 00:07:12.547 "zone_management": false, 00:07:12.547 "zone_append": false, 00:07:12.547 "compare": true, 00:07:12.547 "compare_and_write": false, 00:07:12.547 "abort": true, 00:07:12.547 "seek_hole": false, 00:07:12.547 "seek_data": false, 00:07:12.547 "copy": true, 00:07:12.547 "nvme_iov_md": false 00:07:12.547 }, 00:07:12.547 "driver_specific": { 00:07:12.547 "gpt": { 00:07:12.547 "base_bdev": "Nvme1n1", 00:07:12.547 "offset_blocks": 655360, 00:07:12.547 "partition_type_guid": "7c5222bd-8f5d-4087-9c00-bf9843c7b58c", 00:07:12.547 "unique_partition_guid": "abf1734f-66e5-4c0f-aa29-4021d4d307df", 00:07:12.547 "partition_name": "SPDK_TEST_second" 00:07:12.547 } 00:07:12.547 } 00:07:12.547 } 00:07:12.547 ]' 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@664 -- # jq -r length 00:07:12.547 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid 
-- bdev/blockdev.sh@664 -- # [[ 1 == \1 ]] 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # jq -r '.[0].aliases[0]' 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@665 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # jq -r '.[0].driver_specific.gpt.unique_partition_guid' 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@666 -- # [[ abf1734f-66e5-4c0f-aa29-4021d4d307df == \a\b\f\1\7\3\4\f\-\6\6\e\5\-\4\c\0\f\-\a\a\2\9\-\4\0\2\1\d\4\d\3\0\7\d\f ]] 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- bdev/blockdev.sh@668 -- # killprocess 63945 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@954 -- # '[' -z 63945 ']' 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@958 -- # kill -0 63945 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # uname 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:07:12.548 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 63945 00:07:12.808 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:07:12.808 killing process with pid 63945 00:07:12.809 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:07:12.809 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@972 -- # echo 'killing process with pid 63945' 00:07:12.809 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@973 -- # kill 63945 00:07:12.809 02:02:37 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@978 -- # wait 63945 00:07:14.194 00:07:14.194 real 0m2.996s 00:07:14.194 user 0m3.154s 00:07:14.194 sys 0m0.353s 00:07:14.194 02:02:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:14.194 02:02:38 blockdev_nvme_gpt.bdev_gpt_uuid -- common/autotest_common.sh@10 -- # set +x 00:07:14.194 ************************************ 00:07:14.194 END TEST bdev_gpt_uuid 00:07:14.194 ************************************ 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@836 -- # [[ gpt == crypto_sw ]] 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@849 -- # cleanup 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@26 -- # [[ gpt == rbd ]] 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@30 -- # [[ gpt == daos ]] 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@34 -- # [[ gpt = \g\p\t ]] 00:07:14.194 02:02:38 blockdev_nvme_gpt -- bdev/blockdev.sh@35 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:07:14.455 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:14.717 Waiting for block devices as requested 00:07:14.717 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:07:14.717 0000:00:10.0 (1b36 0010): 
uio_pci_generic -> nvme 00:07:14.978 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:07:14.978 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:07:20.262 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:07:20.262 02:02:44 blockdev_nvme_gpt -- bdev/blockdev.sh@36 -- # [[ -b /dev/nvme0n1 ]] 00:07:20.262 02:02:44 blockdev_nvme_gpt -- bdev/blockdev.sh@37 -- # wipefs --all /dev/nvme0n1 00:07:20.262 /dev/nvme0n1: 8 bytes were erased at offset 0x00001000 (gpt): 45 46 49 20 50 41 52 54 00:07:20.262 /dev/nvme0n1: 8 bytes were erased at offset 0x13ffff000 (gpt): 45 46 49 20 50 41 52 54 00:07:20.262 /dev/nvme0n1: 2 bytes were erased at offset 0x000001fe (PMBR): 55 aa 00:07:20.262 /dev/nvme0n1: calling ioctl to re-read partition table: Success 00:07:20.262 02:02:44 blockdev_nvme_gpt -- bdev/blockdev.sh@40 -- # [[ gpt == xnvme ]] 00:07:20.262 00:07:20.262 real 0m55.233s 00:07:20.262 user 1m11.174s 00:07:20.262 sys 0m7.341s 00:07:20.262 02:02:44 blockdev_nvme_gpt -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:20.262 ************************************ 00:07:20.262 END TEST blockdev_nvme_gpt 00:07:20.262 02:02:44 blockdev_nvme_gpt -- common/autotest_common.sh@10 -- # set +x 00:07:20.262 ************************************ 00:07:20.262 02:02:45 -- spdk/autotest.sh@212 -- # run_test nvme /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:20.262 02:02:45 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:20.262 02:02:45 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:20.262 02:02:45 -- common/autotest_common.sh@10 -- # set +x 00:07:20.262 ************************************ 00:07:20.262 START TEST nvme 00:07:20.262 ************************************ 00:07:20.262 02:02:45 nvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme.sh 00:07:20.524 * Looking for test storage... 00:07:20.524 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1711 -- # lcov --version 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:07:20.524 02:02:45 nvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:07:20.524 02:02:45 nvme -- scripts/common.sh@336 -- # IFS=.-: 00:07:20.524 02:02:45 nvme -- scripts/common.sh@336 -- # read -ra ver1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@337 -- # IFS=.-: 00:07:20.524 02:02:45 nvme -- scripts/common.sh@337 -- # read -ra ver2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@338 -- # local 'op=<' 00:07:20.524 02:02:45 nvme -- scripts/common.sh@340 -- # ver1_l=2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@341 -- # ver2_l=1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:07:20.524 02:02:45 nvme -- scripts/common.sh@344 -- # case "$op" in 00:07:20.524 02:02:45 nvme -- scripts/common.sh@345 -- # : 1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:07:20.524 02:02:45 nvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:07:20.524 02:02:45 nvme -- scripts/common.sh@365 -- # decimal 1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@353 -- # local d=1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:07:20.524 02:02:45 nvme -- scripts/common.sh@355 -- # echo 1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@365 -- # ver1[v]=1 00:07:20.524 02:02:45 nvme -- scripts/common.sh@366 -- # decimal 2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@353 -- # local d=2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:07:20.524 02:02:45 nvme -- scripts/common.sh@355 -- # echo 2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@366 -- # ver2[v]=2 00:07:20.524 02:02:45 nvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:07:20.524 02:02:45 nvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:07:20.524 02:02:45 nvme -- scripts/common.sh@368 -- # return 0 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:07:20.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.524 --rc genhtml_branch_coverage=1 00:07:20.524 --rc genhtml_function_coverage=1 00:07:20.524 --rc genhtml_legend=1 00:07:20.524 --rc geninfo_all_blocks=1 00:07:20.524 --rc geninfo_unexecuted_blocks=1 00:07:20.524 00:07:20.524 ' 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:07:20.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.524 --rc genhtml_branch_coverage=1 00:07:20.524 --rc genhtml_function_coverage=1 00:07:20.524 --rc genhtml_legend=1 00:07:20.524 --rc geninfo_all_blocks=1 00:07:20.524 --rc geninfo_unexecuted_blocks=1 00:07:20.524 00:07:20.524 ' 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:07:20.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.524 --rc genhtml_branch_coverage=1 00:07:20.524 --rc genhtml_function_coverage=1 00:07:20.524 --rc genhtml_legend=1 00:07:20.524 --rc geninfo_all_blocks=1 00:07:20.524 --rc geninfo_unexecuted_blocks=1 00:07:20.524 00:07:20.524 ' 00:07:20.524 02:02:45 nvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:07:20.524 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:07:20.524 --rc genhtml_branch_coverage=1 00:07:20.524 --rc genhtml_function_coverage=1 00:07:20.524 --rc genhtml_legend=1 00:07:20.524 --rc geninfo_all_blocks=1 00:07:20.524 --rc geninfo_unexecuted_blocks=1 00:07:20.524 00:07:20.524 ' 00:07:20.524 02:02:45 nvme -- nvme/nvme.sh@77 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:07:21.095 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:07:21.667 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.667 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.667 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.667 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:07:21.667 02:02:46 nvme -- nvme/nvme.sh@79 -- # uname 00:07:21.667 02:02:46 nvme -- nvme/nvme.sh@79 -- # '[' Linux = Linux ']' 00:07:21.667 02:02:46 nvme -- nvme/nvme.sh@80 -- # trap 'kill_stub -9; exit 1' SIGINT SIGTERM EXIT 00:07:21.667 02:02:46 nvme -- nvme/nvme.sh@81 -- # start_stub '-s 4096 -i 0 -m 0xE' 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1086 -- # _start_stub '-s 4096 -i 0 -m 0xE' 00:07:21.667 02:02:46 nvme -- 
common/autotest_common.sh@1072 -- # _randomize_va_space=2 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1073 -- # echo 0 00:07:21.667 Waiting for stub to ready for secondary processes... 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1075 -- # stubpid=64582 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1074 -- # /home/vagrant/spdk_repo/spdk/test/app/stub/stub -s 4096 -i 0 -m 0xE 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1076 -- # echo Waiting for stub to ready for secondary processes... 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1079 -- # [[ -e /proc/64582 ]] 00:07:21.667 02:02:46 nvme -- common/autotest_common.sh@1080 -- # sleep 1s 00:07:21.667 [2024-12-15 02:02:46.329781] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:07:21.667 [2024-12-15 02:02:46.329895] [ DPDK EAL parameters: stub -c 0xE -m 4096 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto --proc-type=primary ] 00:07:22.609 [2024-12-15 02:02:47.080429] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:07:22.609 [2024-12-15 02:02:47.171680] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:07:22.609 [2024-12-15 02:02:47.172033] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:07:22.609 [2024-12-15 02:02:47.172059] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:07:22.609 [2024-12-15 02:02:47.185362] nvme_cuse.c:1408:start_cuse_thread: *NOTICE*: Successfully started cuse thread to poll for admin commands 00:07:22.609 [2024-12-15 02:02:47.185396] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:22.609 [2024-12-15 02:02:47.198728] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0 created 00:07:22.609 [2024-12-15 02:02:47.198818] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme0n1 created 00:07:22.609 [2024-12-15 02:02:47.200645] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:22.609 [2024-12-15 02:02:47.200787] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1 created 00:07:22.609 [2024-12-15 02:02:47.200834] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme1n1 created 00:07:22.609 [2024-12-15 02:02:47.202596] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:22.609 [2024-12-15 02:02:47.202728] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2 created 00:07:22.609 [2024-12-15 02:02:47.202772] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme2n1 created 00:07:22.609 [2024-12-15 02:02:47.204890] nvme_cuse.c:1220:nvme_cuse_start: *NOTICE*: Creating cuse device for controller 00:07:22.609 [2024-12-15 02:02:47.205045] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3 created 00:07:22.609 [2024-12-15 02:02:47.205142] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n1 created 00:07:22.609 [2024-12-15 02:02:47.205185] nvme_cuse.c: 928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n2 created 00:07:22.609 [2024-12-15 02:02:47.205236] nvme_cuse.c: 
928:cuse_session_create: *NOTICE*: fuse session for device spdk/nvme3n3 created 00:07:22.609 done. 00:07:22.609 02:02:47 nvme -- common/autotest_common.sh@1077 -- # '[' -e /var/run/spdk_stub0 ']' 00:07:22.609 02:02:47 nvme -- common/autotest_common.sh@1082 -- # echo done. 00:07:22.609 02:02:47 nvme -- nvme/nvme.sh@84 -- # run_test nvme_reset /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:22.609 02:02:47 nvme -- common/autotest_common.sh@1105 -- # '[' 10 -le 1 ']' 00:07:22.609 02:02:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.609 02:02:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.609 ************************************ 00:07:22.609 START TEST nvme_reset 00:07:22.609 ************************************ 00:07:22.609 02:02:47 nvme.nvme_reset -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset -q 64 -w write -o 4096 -t 5 00:07:22.868 Initializing NVMe Controllers 00:07:22.868 Skipping QEMU NVMe SSD at 0000:00:10.0 00:07:22.868 Skipping QEMU NVMe SSD at 0000:00:11.0 00:07:22.868 Skipping QEMU NVMe SSD at 0000:00:13.0 00:07:22.868 Skipping QEMU NVMe SSD at 0000:00:12.0 00:07:22.868 No NVMe controller found, /home/vagrant/spdk_repo/spdk/test/nvme/reset/reset exiting 00:07:22.868 00:07:22.868 real 0m0.225s 00:07:22.868 user 0m0.078s 00:07:22.868 sys 0m0.089s 00:07:22.868 02:02:47 nvme.nvme_reset -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:22.868 02:02:47 nvme.nvme_reset -- common/autotest_common.sh@10 -- # set +x 00:07:22.868 ************************************ 00:07:22.868 END TEST nvme_reset 00:07:22.868 ************************************ 00:07:22.868 02:02:47 nvme -- nvme/nvme.sh@85 -- # run_test nvme_identify nvme_identify 00:07:22.868 02:02:47 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:22.868 02:02:47 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:22.868 02:02:47 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:22.868 ************************************ 00:07:22.868 START TEST nvme_identify 00:07:22.868 ************************************ 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1129 -- # nvme_identify 00:07:22.868 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # bdfs=() 00:07:22.868 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@12 -- # local bdfs bdf 00:07:22.868 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # bdfs=($(get_nvme_bdfs)) 00:07:22.868 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@13 -- # get_nvme_bdfs 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1498 -- # local bdfs 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:22.868 02:02:47 nvme.nvme_identify -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:22.868 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@14 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -i 0 00:07:23.129 [2024-12-15 
02:02:47.803926] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:10.0, 0] process 64603 terminated unexpected 00:07:23.129 ===================================================== 00:07:23.129 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.129 ===================================================== 00:07:23.129 Controller Capabilities/Features 00:07:23.129 ================================ 00:07:23.129 Vendor ID: 1b36 00:07:23.129 Subsystem Vendor ID: 1af4 00:07:23.129 Serial Number: 12340 00:07:23.129 Model Number: QEMU NVMe Ctrl 00:07:23.129 Firmware Version: 8.0.0 00:07:23.129 Recommended Arb Burst: 6 00:07:23.129 IEEE OUI Identifier: 00 54 52 00:07:23.129 Multi-path I/O 00:07:23.129 May have multiple subsystem ports: No 00:07:23.129 May have multiple controllers: No 00:07:23.129 Associated with SR-IOV VF: No 00:07:23.129 Max Data Transfer Size: 524288 00:07:23.129 Max Number of Namespaces: 256 00:07:23.129 Max Number of I/O Queues: 64 00:07:23.129 NVMe Specification Version (VS): 1.4 00:07:23.129 NVMe Specification Version (Identify): 1.4 00:07:23.129 Maximum Queue Entries: 2048 00:07:23.129 Contiguous Queues Required: Yes 00:07:23.129 Arbitration Mechanisms Supported 00:07:23.129 Weighted Round Robin: Not Supported 00:07:23.129 Vendor Specific: Not Supported 00:07:23.129 Reset Timeout: 7500 ms 00:07:23.129 Doorbell Stride: 4 bytes 00:07:23.129 NVM Subsystem Reset: Not Supported 00:07:23.129 Command Sets Supported 00:07:23.129 NVM Command Set: Supported 00:07:23.129 Boot Partition: Not Supported 00:07:23.129 Memory Page Size Minimum: 4096 bytes 00:07:23.129 Memory Page Size Maximum: 65536 bytes 00:07:23.129 Persistent Memory Region: Not Supported 00:07:23.129 Optional Asynchronous Events Supported 00:07:23.129 Namespace Attribute Notices: Supported 00:07:23.129 Firmware Activation Notices: Not Supported 00:07:23.129 ANA Change Notices: Not Supported 00:07:23.129 PLE Aggregate Log Change Notices: Not Supported 00:07:23.129 LBA Status Info Alert Notices: Not Supported 00:07:23.129 EGE Aggregate Log Change Notices: Not Supported 00:07:23.129 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.129 Zone Descriptor Change Notices: Not Supported 00:07:23.129 Discovery Log Change Notices: Not Supported 00:07:23.129 Controller Attributes 00:07:23.129 128-bit Host Identifier: Not Supported 00:07:23.129 Non-Operational Permissive Mode: Not Supported 00:07:23.129 NVM Sets: Not Supported 00:07:23.129 Read Recovery Levels: Not Supported 00:07:23.129 Endurance Groups: Not Supported 00:07:23.129 Predictable Latency Mode: Not Supported 00:07:23.129 Traffic Based Keep ALive: Not Supported 00:07:23.129 Namespace Granularity: Not Supported 00:07:23.129 SQ Associations: Not Supported 00:07:23.129 UUID List: Not Supported 00:07:23.129 Multi-Domain Subsystem: Not Supported 00:07:23.129 Fixed Capacity Management: Not Supported 00:07:23.129 Variable Capacity Management: Not Supported 00:07:23.129 Delete Endurance Group: Not Supported 00:07:23.129 Delete NVM Set: Not Supported 00:07:23.129 Extended LBA Formats Supported: Supported 00:07:23.129 Flexible Data Placement Supported: Not Supported 00:07:23.129 00:07:23.129 Controller Memory Buffer Support 00:07:23.129 ================================ 00:07:23.129 Supported: No 00:07:23.129 00:07:23.129 Persistent Memory Region Support 00:07:23.129 ================================ 00:07:23.129 Supported: No 00:07:23.129 00:07:23.129 Admin Command Set Attributes 00:07:23.129 ============================ 00:07:23.129 Security Send/Receive: 
Not Supported 00:07:23.129 Format NVM: Supported 00:07:23.129 Firmware Activate/Download: Not Supported 00:07:23.129 Namespace Management: Supported 00:07:23.129 Device Self-Test: Not Supported 00:07:23.129 Directives: Supported 00:07:23.129 NVMe-MI: Not Supported 00:07:23.129 Virtualization Management: Not Supported 00:07:23.129 Doorbell Buffer Config: Supported 00:07:23.129 Get LBA Status Capability: Not Supported 00:07:23.129 Command & Feature Lockdown Capability: Not Supported 00:07:23.129 Abort Command Limit: 4 00:07:23.129 Async Event Request Limit: 4 00:07:23.129 Number of Firmware Slots: N/A 00:07:23.129 Firmware Slot 1 Read-Only: N/A 00:07:23.129 Firmware Activation Without Reset: N/A 00:07:23.129 Multiple Update Detection Support: N/A 00:07:23.129 Firmware Update Granularity: No Information Provided 00:07:23.129 Per-Namespace SMART Log: Yes 00:07:23.129 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.129 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:23.130 Command Effects Log Page: Supported 00:07:23.130 Get Log Page Extended Data: Supported 00:07:23.130 Telemetry Log Pages: Not Supported 00:07:23.130 Persistent Event Log Pages: Not Supported 00:07:23.130 Supported Log Pages Log Page: May Support 00:07:23.130 Commands Supported & Effects Log Page: Not Supported 00:07:23.130 Feature Identifiers & Effects Log Page:May Support 00:07:23.130 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.130 Data Area 4 for Telemetry Log: Not Supported 00:07:23.130 Error Log Page Entries Supported: 1 00:07:23.130 Keep Alive: Not Supported 00:07:23.130 00:07:23.130 NVM Command Set Attributes 00:07:23.130 ========================== 00:07:23.130 Submission Queue Entry Size 00:07:23.130 Max: 64 00:07:23.130 Min: 64 00:07:23.130 Completion Queue Entry Size 00:07:23.130 Max: 16 00:07:23.130 Min: 16 00:07:23.130 Number of Namespaces: 256 00:07:23.130 Compare Command: Supported 00:07:23.130 Write Uncorrectable Command: Not Supported 00:07:23.130 Dataset Management Command: Supported 00:07:23.130 Write Zeroes Command: Supported 00:07:23.130 Set Features Save Field: Supported 00:07:23.130 Reservations: Not Supported 00:07:23.130 Timestamp: Supported 00:07:23.130 Copy: Supported 00:07:23.130 Volatile Write Cache: Present 00:07:23.130 Atomic Write Unit (Normal): 1 00:07:23.130 Atomic Write Unit (PFail): 1 00:07:23.130 Atomic Compare & Write Unit: 1 00:07:23.130 Fused Compare & Write: Not Supported 00:07:23.130 Scatter-Gather List 00:07:23.130 SGL Command Set: Supported 00:07:23.130 SGL Keyed: Not Supported 00:07:23.130 SGL Bit Bucket Descriptor: Not Supported 00:07:23.130 SGL Metadata Pointer: Not Supported 00:07:23.130 Oversized SGL: Not Supported 00:07:23.130 SGL Metadata Address: Not Supported 00:07:23.130 SGL Offset: Not Supported 00:07:23.130 Transport SGL Data Block: Not Supported 00:07:23.130 Replay Protected Memory Block: Not Supported 00:07:23.130 00:07:23.130 Firmware Slot Information 00:07:23.130 ========================= 00:07:23.130 Active slot: 1 00:07:23.130 Slot 1 Firmware Revision: 1.0 00:07:23.130 00:07:23.130 00:07:23.130 Commands Supported and Effects 00:07:23.130 ============================== 00:07:23.130 Admin Commands 00:07:23.130 -------------- 00:07:23.130 Delete I/O Submission Queue (00h): Supported 00:07:23.130 Create I/O Submission Queue (01h): Supported 00:07:23.130 Get Log Page (02h): Supported 00:07:23.130 Delete I/O Completion Queue (04h): Supported 00:07:23.130 Create I/O Completion Queue (05h): Supported 00:07:23.130 Identify (06h): Supported 
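For reference, the controller dump above and below is produced by the nvme_identify helper traced earlier: it collects the NVMe PCIe addresses from the generated config and then runs the identify example binary. A minimal sketch of that flow, assuming a built SPDK tree at /home/vagrant/spdk_repo/spdk as in this run:

  rootdir=/home/vagrant/spdk_repo/spdk
  # collect the PCIe addresses (bdfs) of the attached NVMe controllers,
  # as get_nvme_bdfs does above via gen_nvme.sh and jq
  bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
  (( ${#bdfs[@]} > 0 )) || exit 1  # mirrors the (( 4 == 0 )) guard above
  # dump controller and namespace data for every attached controller
  "$rootdir/build/bin/spdk_nvme_identify" -i 0

The -i 0 shared-memory ID matches the one the stub was started with, which is what lets identify attach as a secondary process to controllers the stub already initialized. The identify listing continues below.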
00:07:23.130 Abort (08h): Supported 00:07:23.130 Set Features (09h): Supported 00:07:23.130 Get Features (0Ah): Supported 00:07:23.130 Asynchronous Event Request (0Ch): Supported 00:07:23.130 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.130 Directive Send (19h): Supported 00:07:23.130 Directive Receive (1Ah): Supported 00:07:23.130 Virtualization Management (1Ch): Supported 00:07:23.130 Doorbell Buffer Config (7Ch): Supported 00:07:23.130 Format NVM (80h): Supported LBA-Change 00:07:23.130 I/O Commands 00:07:23.130 ------------ 00:07:23.130 Flush (00h): Supported LBA-Change 00:07:23.130 Write (01h): Supported LBA-Change 00:07:23.130 Read (02h): Supported 00:07:23.130 Compare (05h): Supported 00:07:23.130 Write Zeroes (08h): Supported LBA-Change 00:07:23.130 Dataset Management (09h): Supported LBA-Change 00:07:23.130 Unknown (0Ch): Supported 00:07:23.130 Unknown (12h): Supported 00:07:23.130 Copy (19h): Supported LBA-Change 00:07:23.130 Unknown (1Dh): Supported LBA-Change 00:07:23.130 00:07:23.130 Error Log 00:07:23.130 ========= 00:07:23.130 00:07:23.130 Arbitration 00:07:23.130 =========== 00:07:23.130 Arbitration Burst: no limit 00:07:23.130 00:07:23.130 Power Management 00:07:23.130 ================ 00:07:23.130 Number of Power States: 1 00:07:23.130 Current Power State: Power State #0 00:07:23.130 Power State #0: 00:07:23.130 Max Power: 25.00 W 00:07:23.130 Non-Operational State: Operational 00:07:23.130 Entry Latency: 16 microseconds 00:07:23.130 Exit Latency: 4 microseconds 00:07:23.130 Relative Read Throughput: 0 00:07:23.130 Relative Read Latency: 0 00:07:23.130 Relative Write Throughput: 0 00:07:23.130 Relative Write Latency: 0 00:07:23.130 [2024-12-15 02:02:47.805471] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:11.0, 0] process 64603 terminated unexpected 00:07:23.130 Idle Power: Not Reported 00:07:23.130 Active Power: Not Reported 00:07:23.130 Non-Operational Permissive Mode: Not Supported 00:07:23.130 00:07:23.130 Health Information 00:07:23.130 ================== 00:07:23.130 Critical Warnings: 00:07:23.130 Available Spare Space: OK 00:07:23.130 Temperature: OK 00:07:23.130 Device Reliability: OK 00:07:23.130 Read Only: No 00:07:23.130 Volatile Memory Backup: OK 00:07:23.130 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.130 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.130 Available Spare: 0% 00:07:23.130 Available Spare Threshold: 0% 00:07:23.130 Life Percentage Used: 0% 00:07:23.130 Data Units Read: 651 00:07:23.130 Data Units Written: 579 00:07:23.130 Host Read Commands: 34382 00:07:23.130 Host Write Commands: 34168 00:07:23.130 Controller Busy Time: 0 minutes 00:07:23.130 Power Cycles: 0 00:07:23.130 Power On Hours: 0 hours 00:07:23.130 Unsafe Shutdowns: 0 00:07:23.130 Unrecoverable Media Errors: 0 00:07:23.130 Lifetime Error Log Entries: 0 00:07:23.130 Warning Temperature Time: 0 minutes 00:07:23.130 Critical Temperature Time: 0 minutes 00:07:23.130 00:07:23.130 Number of Queues 00:07:23.130 ================ 00:07:23.130 Number of I/O Submission Queues: 64 00:07:23.130 Number of I/O Completion Queues: 64 00:07:23.130 00:07:23.130 ZNS Specific Controller Data 00:07:23.130 ============================ 00:07:23.130 Zone Append Size Limit: 0 00:07:23.130 00:07:23.130 00:07:23.130 Active Namespaces 00:07:23.130 ================= 00:07:23.130 Namespace ID:1 00:07:23.130 Error Recovery Timeout: Unlimited 00:07:23.130 Command Set Identifier: NVM (00h) 00:07:23.130 Deallocate: Supported 
Deallocated/Unwritten Error: Supported 00:07:23.130 Deallocated Read Value: All 0x00 00:07:23.130 Deallocate in Write Zeroes: Not Supported 00:07:23.130 Deallocated Guard Field: 0xFFFF 00:07:23.130 Flush: Supported 00:07:23.130 Reservation: Not Supported 00:07:23.130 Metadata Transferred as: Separate Metadata Buffer 00:07:23.130 Namespace Sharing Capabilities: Private 00:07:23.130 Size (in LBAs): 1548666 (5GiB) 00:07:23.130 Capacity (in LBAs): 1548666 (5GiB) 00:07:23.130 Utilization (in LBAs): 1548666 (5GiB) 00:07:23.130 Thin Provisioning: Not Supported 00:07:23.130 Per-NS Atomic Units: No 00:07:23.130 Maximum Single Source Range Length: 128 00:07:23.130 Maximum Copy Length: 128 00:07:23.130 Maximum Source Range Count: 128 00:07:23.130 NGUID/EUI64 Never Reused: No 00:07:23.130 Namespace Write Protected: No 00:07:23.130 Number of LBA Formats: 8 00:07:23.130 Current LBA Format: LBA Format #07 00:07:23.130 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.130 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.130 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.130 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.130 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.130 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.130 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.130 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.130 00:07:23.130 NVM Specific Namespace Data 00:07:23.130 =========================== 00:07:23.130 Logical Block Storage Tag Mask: 0 00:07:23.130 Protection Information Capabilities: 00:07:23.130 16b Guard Protection Information Storage Tag Support: No 00:07:23.130 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.130 Storage Tag Check Read Support: No 00:07:23.130 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.130 ===================================================== 00:07:23.130 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:23.130 ===================================================== 00:07:23.130 Controller Capabilities/Features 00:07:23.130 ================================ 00:07:23.130 Vendor ID: 1b36 00:07:23.131 Subsystem Vendor ID: 1af4 00:07:23.131 Serial Number: 12341 00:07:23.131 Model Number: QEMU NVMe Ctrl 00:07:23.131 Firmware Version: 8.0.0 00:07:23.131 Recommended Arb Burst: 6 00:07:23.131 IEEE OUI Identifier: 00 54 52 00:07:23.131 Multi-path I/O 00:07:23.131 May have multiple subsystem ports: No 00:07:23.131 May have multiple controllers: No 00:07:23.131 Associated with SR-IOV VF: No 00:07:23.131 Max Data Transfer Size: 524288 00:07:23.131 Max Number of Namespaces: 256 00:07:23.131 Max Number of I/O Queues: 64 00:07:23.131 NVMe Specification Version (VS): 1.4 00:07:23.131 NVMe 
Specification Version (Identify): 1.4 00:07:23.131 Maximum Queue Entries: 2048 00:07:23.131 Contiguous Queues Required: Yes 00:07:23.131 Arbitration Mechanisms Supported 00:07:23.131 Weighted Round Robin: Not Supported 00:07:23.131 Vendor Specific: Not Supported 00:07:23.131 Reset Timeout: 7500 ms 00:07:23.131 Doorbell Stride: 4 bytes 00:07:23.131 NVM Subsystem Reset: Not Supported 00:07:23.131 Command Sets Supported 00:07:23.131 NVM Command Set: Supported 00:07:23.131 Boot Partition: Not Supported 00:07:23.131 Memory Page Size Minimum: 4096 bytes 00:07:23.131 Memory Page Size Maximum: 65536 bytes 00:07:23.131 Persistent Memory Region: Not Supported 00:07:23.131 Optional Asynchronous Events Supported 00:07:23.131 Namespace Attribute Notices: Supported 00:07:23.131 Firmware Activation Notices: Not Supported 00:07:23.131 ANA Change Notices: Not Supported 00:07:23.131 PLE Aggregate Log Change Notices: Not Supported 00:07:23.131 LBA Status Info Alert Notices: Not Supported 00:07:23.131 EGE Aggregate Log Change Notices: Not Supported 00:07:23.131 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.131 Zone Descriptor Change Notices: Not Supported 00:07:23.131 Discovery Log Change Notices: Not Supported 00:07:23.131 Controller Attributes 00:07:23.131 128-bit Host Identifier: Not Supported 00:07:23.131 Non-Operational Permissive Mode: Not Supported 00:07:23.131 NVM Sets: Not Supported 00:07:23.131 Read Recovery Levels: Not Supported 00:07:23.131 Endurance Groups: Not Supported 00:07:23.131 Predictable Latency Mode: Not Supported 00:07:23.131 Traffic Based Keep ALive: Not Supported 00:07:23.131 Namespace Granularity: Not Supported 00:07:23.131 SQ Associations: Not Supported 00:07:23.131 UUID List: Not Supported 00:07:23.131 Multi-Domain Subsystem: Not Supported 00:07:23.131 Fixed Capacity Management: Not Supported 00:07:23.131 Variable Capacity Management: Not Supported 00:07:23.131 Delete Endurance Group: Not Supported 00:07:23.131 Delete NVM Set: Not Supported 00:07:23.131 Extended LBA Formats Supported: Supported 00:07:23.131 Flexible Data Placement Supported: Not Supported 00:07:23.131 00:07:23.131 Controller Memory Buffer Support 00:07:23.131 ================================ 00:07:23.131 Supported: No 00:07:23.131 00:07:23.131 Persistent Memory Region Support 00:07:23.131 ================================ 00:07:23.131 Supported: No 00:07:23.131 00:07:23.131 Admin Command Set Attributes 00:07:23.131 ============================ 00:07:23.131 Security Send/Receive: Not Supported 00:07:23.131 Format NVM: Supported 00:07:23.131 Firmware Activate/Download: Not Supported 00:07:23.131 Namespace Management: Supported 00:07:23.131 Device Self-Test: Not Supported 00:07:23.131 Directives: Supported 00:07:23.131 NVMe-MI: Not Supported 00:07:23.131 Virtualization Management: Not Supported 00:07:23.131 Doorbell Buffer Config: Supported 00:07:23.131 Get LBA Status Capability: Not Supported 00:07:23.131 Command & Feature Lockdown Capability: Not Supported 00:07:23.131 Abort Command Limit: 4 00:07:23.131 Async Event Request Limit: 4 00:07:23.131 Number of Firmware Slots: N/A 00:07:23.131 Firmware Slot 1 Read-Only: N/A 00:07:23.131 Firmware Activation Without Reset: N/A 00:07:23.131 Multiple Update Detection Support: N/A 00:07:23.131 Firmware Update Granularity: No Information Provided 00:07:23.131 Per-Namespace SMART Log: Yes 00:07:23.131 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.131 Subsystem NQN: nqn.2019-08.org.qemu:12341 00:07:23.131 Command Effects Log Page: Supported 
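A side note on the bdev_gpt_uuid checks at the top of this excerpt: each [[ x == \x ]] comparison there is plain bash pattern matching of a jq-extracted field (xtrace prints the quoted right-hand side with backslash escapes). A minimal sketch of the same verification against a running SPDK target, assuming the bdev name Nvme1n1p2 from this run and the SPDK repo root as working directory:

  # fetch the JSON for one GPT partition bdev, as rpc_cmd bdev_get_bdevs -b does above
  bdev_json=$(scripts/rpc.py bdev_get_bdevs -b Nvme1n1p2)
  # the bdev alias and the GPT unique_partition_guid must be the same UUID
  alias_uuid=$(jq -r '.[0].aliases[0]' <<< "$bdev_json")
  part_guid=$(jq -r '.[0].driver_specific.gpt.unique_partition_guid' <<< "$bdev_json")
  [[ "$alias_uuid" == "$part_guid" ]] || echo "GPT GUID mismatch: $alias_uuid vs $part_guid" >&2

The identify listing for controller 12341 continues below.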
00:07:23.131 Get Log Page Extended Data: Supported 00:07:23.131 Telemetry Log Pages: Not Supported 00:07:23.131 Persistent Event Log Pages: Not Supported 00:07:23.131 Supported Log Pages Log Page: May Support 00:07:23.131 Commands Supported & Effects Log Page: Not Supported 00:07:23.131 Feature Identifiers & Effects Log Page:May Support 00:07:23.131 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.131 Data Area 4 for Telemetry Log: Not Supported 00:07:23.131 Error Log Page Entries Supported: 1 00:07:23.131 Keep Alive: Not Supported 00:07:23.131 00:07:23.131 NVM Command Set Attributes 00:07:23.131 ========================== 00:07:23.131 Submission Queue Entry Size 00:07:23.131 Max: 64 00:07:23.131 Min: 64 00:07:23.131 Completion Queue Entry Size 00:07:23.131 Max: 16 00:07:23.131 Min: 16 00:07:23.131 Number of Namespaces: 256 00:07:23.131 Compare Command: Supported 00:07:23.131 Write Uncorrectable Command: Not Supported 00:07:23.131 Dataset Management Command: Supported 00:07:23.131 Write Zeroes Command: Supported 00:07:23.131 Set Features Save Field: Supported 00:07:23.131 Reservations: Not Supported 00:07:23.131 Timestamp: Supported 00:07:23.131 Copy: Supported 00:07:23.131 Volatile Write Cache: Present 00:07:23.131 Atomic Write Unit (Normal): 1 00:07:23.131 Atomic Write Unit (PFail): 1 00:07:23.131 Atomic Compare & Write Unit: 1 00:07:23.131 Fused Compare & Write: Not Supported 00:07:23.131 Scatter-Gather List 00:07:23.131 SGL Command Set: Supported 00:07:23.131 SGL Keyed: Not Supported 00:07:23.131 SGL Bit Bucket Descriptor: Not Supported 00:07:23.131 SGL Metadata Pointer: Not Supported 00:07:23.131 Oversized SGL: Not Supported 00:07:23.131 SGL Metadata Address: Not Supported 00:07:23.131 SGL Offset: Not Supported 00:07:23.131 Transport SGL Data Block: Not Supported 00:07:23.131 Replay Protected Memory Block: Not Supported 00:07:23.131 00:07:23.131 Firmware Slot Information 00:07:23.131 ========================= 00:07:23.131 Active slot: 1 00:07:23.131 Slot 1 Firmware Revision: 1.0 00:07:23.131 00:07:23.131 00:07:23.131 Commands Supported and Effects 00:07:23.131 ============================== 00:07:23.131 Admin Commands 00:07:23.131 -------------- 00:07:23.131 Delete I/O Submission Queue (00h): Supported 00:07:23.131 Create I/O Submission Queue (01h): Supported 00:07:23.131 Get Log Page (02h): Supported 00:07:23.131 Delete I/O Completion Queue (04h): Supported 00:07:23.131 Create I/O Completion Queue (05h): Supported 00:07:23.131 Identify (06h): Supported 00:07:23.131 Abort (08h): Supported 00:07:23.131 Set Features (09h): Supported 00:07:23.131 Get Features (0Ah): Supported 00:07:23.131 Asynchronous Event Request (0Ch): Supported 00:07:23.131 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.131 Directive Send (19h): Supported 00:07:23.131 Directive Receive (1Ah): Supported 00:07:23.131 Virtualization Management (1Ch): Supported 00:07:23.131 Doorbell Buffer Config (7Ch): Supported 00:07:23.131 Format NVM (80h): Supported LBA-Change 00:07:23.131 I/O Commands 00:07:23.131 ------------ 00:07:23.131 Flush (00h): Supported LBA-Change 00:07:23.131 Write (01h): Supported LBA-Change 00:07:23.131 Read (02h): Supported 00:07:23.131 Compare (05h): Supported 00:07:23.131 Write Zeroes (08h): Supported LBA-Change 00:07:23.131 Dataset Management (09h): Supported LBA-Change 00:07:23.131 Unknown (0Ch): Supported 00:07:23.131 Unknown (12h): Supported 00:07:23.131 Copy (19h): Supported LBA-Change 00:07:23.131 Unknown (1Dh): Supported LBA-Change 00:07:23.131 00:07:23.131 Error 
Log 00:07:23.131 ========= 00:07:23.131 00:07:23.131 Arbitration 00:07:23.131 =========== 00:07:23.131 Arbitration Burst: no limit 00:07:23.131 00:07:23.131 Power Management 00:07:23.131 ================ 00:07:23.131 Number of Power States: 1 00:07:23.131 Current Power State: Power State #0 00:07:23.131 Power State #0: 00:07:23.131 Max Power: 25.00 W 00:07:23.131 Non-Operational State: Operational 00:07:23.131 Entry Latency: 16 microseconds 00:07:23.131 Exit Latency: 4 microseconds 00:07:23.131 Relative Read Throughput: 0 00:07:23.131 Relative Read Latency: 0 00:07:23.131 Relative Write Throughput: 0 00:07:23.131 Relative Write Latency: 0 00:07:23.131 Idle Power: Not Reported 00:07:23.131 Active Power: Not Reported 00:07:23.131 Non-Operational Permissive Mode: Not Supported 00:07:23.131 00:07:23.131 Health Information 00:07:23.131 ================== 00:07:23.132 Critical Warnings: 00:07:23.132 Available Spare Space: OK 00:07:23.132 [2024-12-15 02:02:47.806266] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:13.0, 0] process 64603 terminated unexpected 00:07:23.132 Temperature: OK 00:07:23.132 Device Reliability: OK 00:07:23.132 Read Only: No 00:07:23.132 Volatile Memory Backup: OK 00:07:23.132 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.132 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.132 Available Spare: 0% 00:07:23.132 Available Spare Threshold: 0% 00:07:23.132 Life Percentage Used: 0% 00:07:23.132 Data Units Read: 985 00:07:23.132 Data Units Written: 852 00:07:23.132 Host Read Commands: 50946 00:07:23.132 Host Write Commands: 49744 00:07:23.132 Controller Busy Time: 0 minutes 00:07:23.132 Power Cycles: 0 00:07:23.132 Power On Hours: 0 hours 00:07:23.132 Unsafe Shutdowns: 0 00:07:23.132 Unrecoverable Media Errors: 0 00:07:23.132 Lifetime Error Log Entries: 0 00:07:23.132 Warning Temperature Time: 0 minutes 00:07:23.132 Critical Temperature Time: 0 minutes 00:07:23.132 00:07:23.132 Number of Queues 00:07:23.132 ================ 00:07:23.132 Number of I/O Submission Queues: 64 00:07:23.132 Number of I/O Completion Queues: 64 00:07:23.132 00:07:23.132 ZNS Specific Controller Data 00:07:23.132 ============================ 00:07:23.132 Zone Append Size Limit: 0 00:07:23.132 00:07:23.132 00:07:23.132 Active Namespaces 00:07:23.132 ================= 00:07:23.132 Namespace ID:1 00:07:23.132 Error Recovery Timeout: Unlimited 00:07:23.132 Command Set Identifier: NVM (00h) 00:07:23.132 Deallocate: Supported 00:07:23.132 Deallocated/Unwritten Error: Supported 00:07:23.132 Deallocated Read Value: All 0x00 00:07:23.132 Deallocate in Write Zeroes: Not Supported 00:07:23.132 Deallocated Guard Field: 0xFFFF 00:07:23.132 Flush: Supported 00:07:23.132 Reservation: Not Supported 00:07:23.132 Namespace Sharing Capabilities: Private 00:07:23.132 Size (in LBAs): 1310720 (5GiB) 00:07:23.132 Capacity (in LBAs): 1310720 (5GiB) 00:07:23.132 Utilization (in LBAs): 1310720 (5GiB) 00:07:23.132 Thin Provisioning: Not Supported 00:07:23.132 Per-NS Atomic Units: No 00:07:23.132 Maximum Single Source Range Length: 128 00:07:23.132 Maximum Copy Length: 128 00:07:23.132 Maximum Source Range Count: 128 00:07:23.132 NGUID/EUI64 Never Reused: No 00:07:23.132 Namespace Write Protected: No 00:07:23.132 Number of LBA Formats: 8 00:07:23.132 Current LBA Format: LBA Format #04 00:07:23.132 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.132 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.132 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.132 LBA Format #03:
Data Size: 512 Metadata Size: 64 00:07:23.132 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.132 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.132 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.132 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.132 00:07:23.132 NVM Specific Namespace Data 00:07:23.132 =========================== 00:07:23.132 Logical Block Storage Tag Mask: 0 00:07:23.132 Protection Information Capabilities: 00:07:23.132 16b Guard Protection Information Storage Tag Support: No 00:07:23.132 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.132 Storage Tag Check Read Support: No 00:07:23.132 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.132 ===================================================== 00:07:23.132 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:23.132 ===================================================== 00:07:23.132 Controller Capabilities/Features 00:07:23.132 ================================ 00:07:23.132 Vendor ID: 1b36 00:07:23.132 Subsystem Vendor ID: 1af4 00:07:23.132 Serial Number: 12343 00:07:23.132 Model Number: QEMU NVMe Ctrl 00:07:23.132 Firmware Version: 8.0.0 00:07:23.132 Recommended Arb Burst: 6 00:07:23.132 IEEE OUI Identifier: 00 54 52 00:07:23.132 Multi-path I/O 00:07:23.132 May have multiple subsystem ports: No 00:07:23.132 May have multiple controllers: Yes 00:07:23.132 Associated with SR-IOV VF: No 00:07:23.132 Max Data Transfer Size: 524288 00:07:23.132 Max Number of Namespaces: 256 00:07:23.132 Max Number of I/O Queues: 64 00:07:23.132 NVMe Specification Version (VS): 1.4 00:07:23.132 NVMe Specification Version (Identify): 1.4 00:07:23.132 Maximum Queue Entries: 2048 00:07:23.132 Contiguous Queues Required: Yes 00:07:23.132 Arbitration Mechanisms Supported 00:07:23.132 Weighted Round Robin: Not Supported 00:07:23.132 Vendor Specific: Not Supported 00:07:23.132 Reset Timeout: 7500 ms 00:07:23.132 Doorbell Stride: 4 bytes 00:07:23.132 NVM Subsystem Reset: Not Supported 00:07:23.132 Command Sets Supported 00:07:23.132 NVM Command Set: Supported 00:07:23.132 Boot Partition: Not Supported 00:07:23.132 Memory Page Size Minimum: 4096 bytes 00:07:23.132 Memory Page Size Maximum: 65536 bytes 00:07:23.132 Persistent Memory Region: Not Supported 00:07:23.132 Optional Asynchronous Events Supported 00:07:23.132 Namespace Attribute Notices: Supported 00:07:23.132 Firmware Activation Notices: Not Supported 00:07:23.132 ANA Change Notices: Not Supported 00:07:23.132 PLE Aggregate Log Change Notices: Not Supported 00:07:23.132 LBA Status Info Alert Notices: Not Supported 00:07:23.132 EGE Aggregate Log Change Notices: Not Supported 00:07:23.132 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.132 Zone 
Descriptor Change Notices: Not Supported 00:07:23.132 Discovery Log Change Notices: Not Supported 00:07:23.132 Controller Attributes 00:07:23.132 128-bit Host Identifier: Not Supported 00:07:23.132 Non-Operational Permissive Mode: Not Supported 00:07:23.132 NVM Sets: Not Supported 00:07:23.132 Read Recovery Levels: Not Supported 00:07:23.132 Endurance Groups: Supported 00:07:23.132 Predictable Latency Mode: Not Supported 00:07:23.132 Traffic Based Keep ALive: Not Supported 00:07:23.132 Namespace Granularity: Not Supported 00:07:23.132 SQ Associations: Not Supported 00:07:23.132 UUID List: Not Supported 00:07:23.132 Multi-Domain Subsystem: Not Supported 00:07:23.132 Fixed Capacity Management: Not Supported 00:07:23.132 Variable Capacity Management: Not Supported 00:07:23.132 Delete Endurance Group: Not Supported 00:07:23.132 Delete NVM Set: Not Supported 00:07:23.132 Extended LBA Formats Supported: Supported 00:07:23.132 Flexible Data Placement Supported: Supported 00:07:23.132 00:07:23.132 Controller Memory Buffer Support 00:07:23.132 ================================ 00:07:23.132 Supported: No 00:07:23.132 00:07:23.132 Persistent Memory Region Support 00:07:23.132 ================================ 00:07:23.132 Supported: No 00:07:23.132 00:07:23.132 Admin Command Set Attributes 00:07:23.132 ============================ 00:07:23.132 Security Send/Receive: Not Supported 00:07:23.132 Format NVM: Supported 00:07:23.132 Firmware Activate/Download: Not Supported 00:07:23.132 Namespace Management: Supported 00:07:23.132 Device Self-Test: Not Supported 00:07:23.132 Directives: Supported 00:07:23.132 NVMe-MI: Not Supported 00:07:23.132 Virtualization Management: Not Supported 00:07:23.132 Doorbell Buffer Config: Supported 00:07:23.132 Get LBA Status Capability: Not Supported 00:07:23.132 Command & Feature Lockdown Capability: Not Supported 00:07:23.132 Abort Command Limit: 4 00:07:23.132 Async Event Request Limit: 4 00:07:23.132 Number of Firmware Slots: N/A 00:07:23.132 Firmware Slot 1 Read-Only: N/A 00:07:23.132 Firmware Activation Without Reset: N/A 00:07:23.132 Multiple Update Detection Support: N/A 00:07:23.132 Firmware Update Granularity: No Information Provided 00:07:23.132 Per-Namespace SMART Log: Yes 00:07:23.132 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.132 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:23.132 Command Effects Log Page: Supported 00:07:23.132 Get Log Page Extended Data: Supported 00:07:23.132 Telemetry Log Pages: Not Supported 00:07:23.132 Persistent Event Log Pages: Not Supported 00:07:23.132 Supported Log Pages Log Page: May Support 00:07:23.132 Commands Supported & Effects Log Page: Not Supported 00:07:23.132 Feature Identifiers & Effects Log Page:May Support 00:07:23.132 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.132 Data Area 4 for Telemetry Log: Not Supported 00:07:23.132 Error Log Page Entries Supported: 1 00:07:23.132 Keep Alive: Not Supported 00:07:23.132 00:07:23.132 NVM Command Set Attributes 00:07:23.132 ========================== 00:07:23.132 Submission Queue Entry Size 00:07:23.132 Max: 64 00:07:23.132 Min: 64 00:07:23.132 Completion Queue Entry Size 00:07:23.132 Max: 16 00:07:23.133 Min: 16 00:07:23.133 Number of Namespaces: 256 00:07:23.133 Compare Command: Supported 00:07:23.133 Write Uncorrectable Command: Not Supported 00:07:23.133 Dataset Management Command: Supported 00:07:23.133 Write Zeroes Command: Supported 00:07:23.133 Set Features Save Field: Supported 00:07:23.133 Reservations: Not Supported 00:07:23.133 
Timestamp: Supported 00:07:23.133 Copy: Supported 00:07:23.133 Volatile Write Cache: Present 00:07:23.133 Atomic Write Unit (Normal): 1 00:07:23.133 Atomic Write Unit (PFail): 1 00:07:23.133 Atomic Compare & Write Unit: 1 00:07:23.133 Fused Compare & Write: Not Supported 00:07:23.133 Scatter-Gather List 00:07:23.133 SGL Command Set: Supported 00:07:23.133 SGL Keyed: Not Supported 00:07:23.133 SGL Bit Bucket Descriptor: Not Supported 00:07:23.133 SGL Metadata Pointer: Not Supported 00:07:23.133 Oversized SGL: Not Supported 00:07:23.133 SGL Metadata Address: Not Supported 00:07:23.133 SGL Offset: Not Supported 00:07:23.133 Transport SGL Data Block: Not Supported 00:07:23.133 Replay Protected Memory Block: Not Supported 00:07:23.133 00:07:23.133 Firmware Slot Information 00:07:23.133 ========================= 00:07:23.133 Active slot: 1 00:07:23.133 Slot 1 Firmware Revision: 1.0 00:07:23.133 00:07:23.133 00:07:23.133 Commands Supported and Effects 00:07:23.133 ============================== 00:07:23.133 Admin Commands 00:07:23.133 -------------- 00:07:23.133 Delete I/O Submission Queue (00h): Supported 00:07:23.133 Create I/O Submission Queue (01h): Supported 00:07:23.133 Get Log Page (02h): Supported 00:07:23.133 Delete I/O Completion Queue (04h): Supported 00:07:23.133 Create I/O Completion Queue (05h): Supported 00:07:23.133 Identify (06h): Supported 00:07:23.133 Abort (08h): Supported 00:07:23.133 Set Features (09h): Supported 00:07:23.133 Get Features (0Ah): Supported 00:07:23.133 Asynchronous Event Request (0Ch): Supported 00:07:23.133 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.133 Directive Send (19h): Supported 00:07:23.133 Directive Receive (1Ah): Supported 00:07:23.133 Virtualization Management (1Ch): Supported 00:07:23.133 Doorbell Buffer Config (7Ch): Supported 00:07:23.133 Format NVM (80h): Supported LBA-Change 00:07:23.133 I/O Commands 00:07:23.133 ------------ 00:07:23.133 Flush (00h): Supported LBA-Change 00:07:23.133 Write (01h): Supported LBA-Change 00:07:23.133 Read (02h): Supported 00:07:23.133 Compare (05h): Supported 00:07:23.133 Write Zeroes (08h): Supported LBA-Change 00:07:23.133 Dataset Management (09h): Supported LBA-Change 00:07:23.133 Unknown (0Ch): Supported 00:07:23.133 Unknown (12h): Supported 00:07:23.133 Copy (19h): Supported LBA-Change 00:07:23.133 Unknown (1Dh): Supported LBA-Change 00:07:23.133 00:07:23.133 Error Log 00:07:23.133 ========= 00:07:23.133 00:07:23.133 Arbitration 00:07:23.133 =========== 00:07:23.133 Arbitration Burst: no limit 00:07:23.133 00:07:23.133 Power Management 00:07:23.133 ================ 00:07:23.133 Number of Power States: 1 00:07:23.133 Current Power State: Power State #0 00:07:23.133 Power State #0: 00:07:23.133 Max Power: 25.00 W 00:07:23.133 Non-Operational State: Operational 00:07:23.133 Entry Latency: 16 microseconds 00:07:23.133 Exit Latency: 4 microseconds 00:07:23.133 Relative Read Throughput: 0 00:07:23.133 Relative Read Latency: 0 00:07:23.133 Relative Write Throughput: 0 00:07:23.133 Relative Write Latency: 0 00:07:23.133 Idle Power: Not Reported 00:07:23.133 Active Power: Not Reported 00:07:23.133 Non-Operational Permissive Mode: Not Supported 00:07:23.133 00:07:23.133 Health Information 00:07:23.133 ================== 00:07:23.133 Critical Warnings: 00:07:23.133 Available Spare Space: OK 00:07:23.133 Temperature: OK 00:07:23.133 Device Reliability: OK 00:07:23.133 Read Only: No 00:07:23.133 Volatile Memory Backup: OK 00:07:23.133 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.133 
Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.133 Available Spare: 0% 00:07:23.133 Available Spare Threshold: 0% 00:07:23.133 Life Percentage Used: 0% 00:07:23.133 Data Units Read: 820 00:07:23.133 Data Units Written: 749 00:07:23.133 Host Read Commands: 36066 00:07:23.133 Host Write Commands: 35489 00:07:23.133 Controller Busy Time: 0 minutes 00:07:23.133 Power Cycles: 0 00:07:23.133 Power On Hours: 0 hours 00:07:23.133 Unsafe Shutdowns: 0 00:07:23.133 Unrecoverable Media Errors: 0 00:07:23.133 Lifetime Error Log Entries: 0 00:07:23.133 Warning Temperature Time: 0 minutes 00:07:23.133 Critical Temperature Time: 0 minutes 00:07:23.133 00:07:23.133 Number of Queues 00:07:23.133 ================ 00:07:23.133 Number of I/O Submission Queues: 64 00:07:23.133 Number of I/O Completion Queues: 64 00:07:23.133 00:07:23.133 ZNS Specific Controller Data 00:07:23.133 ============================ 00:07:23.133 Zone Append Size Limit: 0 00:07:23.133 00:07:23.133 00:07:23.133 Active Namespaces 00:07:23.133 ================= 00:07:23.133 Namespace ID:1 00:07:23.133 Error Recovery Timeout: Unlimited 00:07:23.133 Command Set Identifier: NVM (00h) 00:07:23.133 Deallocate: Supported 00:07:23.133 Deallocated/Unwritten Error: Supported 00:07:23.133 Deallocated Read Value: All 0x00 00:07:23.133 Deallocate in Write Zeroes: Not Supported 00:07:23.133 Deallocated Guard Field: 0xFFFF 00:07:23.133 Flush: Supported 00:07:23.133 Reservation: Not Supported 00:07:23.133 Namespace Sharing Capabilities: Multiple Controllers 00:07:23.133 Size (in LBAs): 262144 (1GiB) 00:07:23.133 Capacity (in LBAs): 262144 (1GiB) 00:07:23.133 Utilization (in LBAs): 262144 (1GiB) 00:07:23.133 Thin Provisioning: Not Supported 00:07:23.133 Per-NS Atomic Units: No 00:07:23.133 Maximum Single Source Range Length: 128 00:07:23.133 Maximum Copy Length: 128 00:07:23.133 Maximum Source Range Count: 128 00:07:23.133 NGUID/EUI64 Never Reused: No 00:07:23.133 Namespace Write Protected: No 00:07:23.133 Endurance group ID: 1 00:07:23.133 Number of LBA Formats: 8 00:07:23.133 Current LBA Format: LBA Format #04 00:07:23.133 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.133 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.133 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.133 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.133 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.133 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.133 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.133 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.133 00:07:23.133 Get Feature FDP: 00:07:23.133 ================ 00:07:23.133 Enabled: Yes 00:07:23.133 FDP configuration index: 0 00:07:23.133 00:07:23.133 FDP configurations log page 00:07:23.133 =========================== 00:07:23.133 Number of FDP configurations: 1 00:07:23.133 Version: 0 00:07:23.133 Size: 112 00:07:23.133 FDP Configuration Descriptor: 0 00:07:23.133 Descriptor Size: 96 00:07:23.133 Reclaim Group Identifier format: 2 00:07:23.133 FDP Volatile Write Cache: Not Present 00:07:23.133 FDP Configuration: Valid 00:07:23.133 Vendor Specific Size: 0 00:07:23.133 Number of Reclaim Groups: 2 00:07:23.133 Number of Reclaim Unit Handles: 8 00:07:23.133 Max Placement Identifiers: 128 00:07:23.133 Number of Namespaces Supported: 256 00:07:23.133 Reclaim Unit Nominal Size: 6000000 bytes 00:07:23.133 Estimated Reclaim Unit Time Limit: Not Reported 00:07:23.133 RUH Desc #000: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #001: RUH 
Type: Initially Isolated 00:07:23.133 RUH Desc #002: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #003: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #004: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #005: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #006: RUH Type: Initially Isolated 00:07:23.133 RUH Desc #007: RUH Type: Initially Isolated 00:07:23.133 00:07:23.133 FDP reclaim unit handle usage log page 00:07:23.133 ====================================== 00:07:23.133 Number of Reclaim Unit Handles: 8 00:07:23.133 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:23.133 RUH Usage Desc #001: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #002: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #003: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #004: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #005: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #006: RUH Attributes: Unused 00:07:23.133 RUH Usage Desc #007: RUH Attributes: Unused 00:07:23.133 00:07:23.133 FDP statistics log page 00:07:23.133 ======================= 00:07:23.133 Host bytes with metadata written: 484089856 00:07:23.133 [2024-12-15 02:02:47.807586] nvme_ctrlr.c:3641:nvme_ctrlr_remove_inactive_proc: *ERROR*: [0000:00:12.0, 0] process 64603 terminated unexpected 00:07:23.133 Media bytes with metadata written: 484143104 00:07:23.133 Media bytes erased: 0 00:07:23.133 00:07:23.133 FDP events log page 00:07:23.133 =================== 00:07:23.133 Number of FDP events: 0 00:07:23.133 00:07:23.133 NVM Specific Namespace Data 00:07:23.134 =========================== 00:07:23.134 Logical Block Storage Tag Mask: 0 00:07:23.134 Protection Information Capabilities: 00:07:23.134 16b Guard Protection Information Storage Tag Support: No 00:07:23.134 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.134 Storage Tag Check Read Support: No 00:07:23.134 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.134 ===================================================== 00:07:23.134 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:23.134 ===================================================== 00:07:23.134 Controller Capabilities/Features 00:07:23.134 ================================ 00:07:23.134 Vendor ID: 1b36 00:07:23.134 Subsystem Vendor ID: 1af4 00:07:23.134 Serial Number: 12342 00:07:23.134 Model Number: QEMU NVMe Ctrl 00:07:23.134 Firmware Version: 8.0.0 00:07:23.134 Recommended Arb Burst: 6 00:07:23.134 IEEE OUI Identifier: 00 54 52 00:07:23.134 Multi-path I/O 00:07:23.134 May have multiple subsystem ports: No 00:07:23.134 May have multiple controllers: No 00:07:23.134 Associated with SR-IOV VF: No 00:07:23.134 Max Data Transfer Size: 524288 00:07:23.134 Max Number of Namespaces: 256 00:07:23.134 
Max Number of I/O Queues: 64 00:07:23.134 NVMe Specification Version (VS): 1.4 00:07:23.134 NVMe Specification Version (Identify): 1.4 00:07:23.134 Maximum Queue Entries: 2048 00:07:23.134 Contiguous Queues Required: Yes 00:07:23.134 Arbitration Mechanisms Supported 00:07:23.134 Weighted Round Robin: Not Supported 00:07:23.134 Vendor Specific: Not Supported 00:07:23.134 Reset Timeout: 7500 ms 00:07:23.134 Doorbell Stride: 4 bytes 00:07:23.134 NVM Subsystem Reset: Not Supported 00:07:23.134 Command Sets Supported 00:07:23.134 NVM Command Set: Supported 00:07:23.134 Boot Partition: Not Supported 00:07:23.134 Memory Page Size Minimum: 4096 bytes 00:07:23.134 Memory Page Size Maximum: 65536 bytes 00:07:23.134 Persistent Memory Region: Not Supported 00:07:23.134 Optional Asynchronous Events Supported 00:07:23.134 Namespace Attribute Notices: Supported 00:07:23.134 Firmware Activation Notices: Not Supported 00:07:23.134 ANA Change Notices: Not Supported 00:07:23.134 PLE Aggregate Log Change Notices: Not Supported 00:07:23.134 LBA Status Info Alert Notices: Not Supported 00:07:23.134 EGE Aggregate Log Change Notices: Not Supported 00:07:23.134 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.134 Zone Descriptor Change Notices: Not Supported 00:07:23.134 Discovery Log Change Notices: Not Supported 00:07:23.134 Controller Attributes 00:07:23.134 128-bit Host Identifier: Not Supported 00:07:23.134 Non-Operational Permissive Mode: Not Supported 00:07:23.134 NVM Sets: Not Supported 00:07:23.134 Read Recovery Levels: Not Supported 00:07:23.134 Endurance Groups: Not Supported 00:07:23.134 Predictable Latency Mode: Not Supported 00:07:23.134 Traffic Based Keep ALive: Not Supported 00:07:23.134 Namespace Granularity: Not Supported 00:07:23.134 SQ Associations: Not Supported 00:07:23.134 UUID List: Not Supported 00:07:23.134 Multi-Domain Subsystem: Not Supported 00:07:23.134 Fixed Capacity Management: Not Supported 00:07:23.134 Variable Capacity Management: Not Supported 00:07:23.134 Delete Endurance Group: Not Supported 00:07:23.134 Delete NVM Set: Not Supported 00:07:23.134 Extended LBA Formats Supported: Supported 00:07:23.134 Flexible Data Placement Supported: Not Supported 00:07:23.134 00:07:23.134 Controller Memory Buffer Support 00:07:23.134 ================================ 00:07:23.134 Supported: No 00:07:23.134 00:07:23.134 Persistent Memory Region Support 00:07:23.134 ================================ 00:07:23.134 Supported: No 00:07:23.134 00:07:23.134 Admin Command Set Attributes 00:07:23.134 ============================ 00:07:23.134 Security Send/Receive: Not Supported 00:07:23.134 Format NVM: Supported 00:07:23.134 Firmware Activate/Download: Not Supported 00:07:23.134 Namespace Management: Supported 00:07:23.134 Device Self-Test: Not Supported 00:07:23.134 Directives: Supported 00:07:23.134 NVMe-MI: Not Supported 00:07:23.134 Virtualization Management: Not Supported 00:07:23.134 Doorbell Buffer Config: Supported 00:07:23.134 Get LBA Status Capability: Not Supported 00:07:23.134 Command & Feature Lockdown Capability: Not Supported 00:07:23.134 Abort Command Limit: 4 00:07:23.134 Async Event Request Limit: 4 00:07:23.134 Number of Firmware Slots: N/A 00:07:23.134 Firmware Slot 1 Read-Only: N/A 00:07:23.134 Firmware Activation Without Reset: N/A 00:07:23.134 Multiple Update Detection Support: N/A 00:07:23.134 Firmware Update Granularity: No Information Provided 00:07:23.134 Per-Namespace SMART Log: Yes 00:07:23.134 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.134 
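The dump above lists Maximum Queue Entries: 2048, and the NVM command set attributes that follow report 64-byte submission queue entries and 16-byte completion queue entries. Those figures fix the per-queue memory footprint at maximum depth; a quick arithmetic check in shell, using the sizes from the dump:

  # Per-queue footprint at the maximum queue depth reported above.
  echo $(( 2048 * 64 ))   # SQ: 131072 bytes = 128 KiB per submission queue
  echo $(( 2048 * 16 ))   # CQ:  32768 bytes =  32 KiB per completion queue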
Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:23.134 Command Effects Log Page: Supported 00:07:23.134 Get Log Page Extended Data: Supported 00:07:23.134 Telemetry Log Pages: Not Supported 00:07:23.134 Persistent Event Log Pages: Not Supported 00:07:23.134 Supported Log Pages Log Page: May Support 00:07:23.134 Commands Supported & Effects Log Page: Not Supported 00:07:23.134 Feature Identifiers & Effects Log Page:May Support 00:07:23.134 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.134 Data Area 4 for Telemetry Log: Not Supported 00:07:23.134 Error Log Page Entries Supported: 1 00:07:23.134 Keep Alive: Not Supported 00:07:23.134 00:07:23.134 NVM Command Set Attributes 00:07:23.134 ========================== 00:07:23.134 Submission Queue Entry Size 00:07:23.134 Max: 64 00:07:23.134 Min: 64 00:07:23.134 Completion Queue Entry Size 00:07:23.134 Max: 16 00:07:23.134 Min: 16 00:07:23.134 Number of Namespaces: 256 00:07:23.134 Compare Command: Supported 00:07:23.134 Write Uncorrectable Command: Not Supported 00:07:23.134 Dataset Management Command: Supported 00:07:23.134 Write Zeroes Command: Supported 00:07:23.134 Set Features Save Field: Supported 00:07:23.134 Reservations: Not Supported 00:07:23.134 Timestamp: Supported 00:07:23.134 Copy: Supported 00:07:23.134 Volatile Write Cache: Present 00:07:23.134 Atomic Write Unit (Normal): 1 00:07:23.134 Atomic Write Unit (PFail): 1 00:07:23.134 Atomic Compare & Write Unit: 1 00:07:23.134 Fused Compare & Write: Not Supported 00:07:23.134 Scatter-Gather List 00:07:23.134 SGL Command Set: Supported 00:07:23.134 SGL Keyed: Not Supported 00:07:23.134 SGL Bit Bucket Descriptor: Not Supported 00:07:23.134 SGL Metadata Pointer: Not Supported 00:07:23.134 Oversized SGL: Not Supported 00:07:23.134 SGL Metadata Address: Not Supported 00:07:23.134 SGL Offset: Not Supported 00:07:23.134 Transport SGL Data Block: Not Supported 00:07:23.134 Replay Protected Memory Block: Not Supported 00:07:23.134 00:07:23.134 Firmware Slot Information 00:07:23.134 ========================= 00:07:23.134 Active slot: 1 00:07:23.134 Slot 1 Firmware Revision: 1.0 00:07:23.134 00:07:23.134 00:07:23.134 Commands Supported and Effects 00:07:23.134 ============================== 00:07:23.134 Admin Commands 00:07:23.134 -------------- 00:07:23.135 Delete I/O Submission Queue (00h): Supported 00:07:23.135 Create I/O Submission Queue (01h): Supported 00:07:23.135 Get Log Page (02h): Supported 00:07:23.135 Delete I/O Completion Queue (04h): Supported 00:07:23.135 Create I/O Completion Queue (05h): Supported 00:07:23.135 Identify (06h): Supported 00:07:23.135 Abort (08h): Supported 00:07:23.135 Set Features (09h): Supported 00:07:23.135 Get Features (0Ah): Supported 00:07:23.135 Asynchronous Event Request (0Ch): Supported 00:07:23.135 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.135 Directive Send (19h): Supported 00:07:23.135 Directive Receive (1Ah): Supported 00:07:23.135 Virtualization Management (1Ch): Supported 00:07:23.135 Doorbell Buffer Config (7Ch): Supported 00:07:23.135 Format NVM (80h): Supported LBA-Change 00:07:23.135 I/O Commands 00:07:23.135 ------------ 00:07:23.135 Flush (00h): Supported LBA-Change 00:07:23.135 Write (01h): Supported LBA-Change 00:07:23.135 Read (02h): Supported 00:07:23.135 Compare (05h): Supported 00:07:23.135 Write Zeroes (08h): Supported LBA-Change 00:07:23.135 Dataset Management (09h): Supported LBA-Change 00:07:23.135 Unknown (0Ch): Supported 00:07:23.135 Unknown (12h): Supported 00:07:23.135 Copy (19h): Supported 
LBA-Change 00:07:23.135 Unknown (1Dh): Supported LBA-Change 00:07:23.135 00:07:23.135 Error Log 00:07:23.135 ========= 00:07:23.135 00:07:23.135 Arbitration 00:07:23.135 =========== 00:07:23.135 Arbitration Burst: no limit 00:07:23.135 00:07:23.135 Power Management 00:07:23.135 ================ 00:07:23.135 Number of Power States: 1 00:07:23.135 Current Power State: Power State #0 00:07:23.135 Power State #0: 00:07:23.135 Max Power: 25.00 W 00:07:23.135 Non-Operational State: Operational 00:07:23.135 Entry Latency: 16 microseconds 00:07:23.135 Exit Latency: 4 microseconds 00:07:23.135 Relative Read Throughput: 0 00:07:23.135 Relative Read Latency: 0 00:07:23.135 Relative Write Throughput: 0 00:07:23.135 Relative Write Latency: 0 00:07:23.135 Idle Power: Not Reported 00:07:23.135 Active Power: Not Reported 00:07:23.135 Non-Operational Permissive Mode: Not Supported 00:07:23.135 00:07:23.135 Health Information 00:07:23.135 ================== 00:07:23.135 Critical Warnings: 00:07:23.135 Available Spare Space: OK 00:07:23.135 Temperature: OK 00:07:23.135 Device Reliability: OK 00:07:23.135 Read Only: No 00:07:23.135 Volatile Memory Backup: OK 00:07:23.135 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.135 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.135 Available Spare: 0% 00:07:23.135 Available Spare Threshold: 0% 00:07:23.135 Life Percentage Used: 0% 00:07:23.135 Data Units Read: 2123 00:07:23.135 Data Units Written: 1910 00:07:23.135 Host Read Commands: 105144 00:07:23.135 Host Write Commands: 103414 00:07:23.135 Controller Busy Time: 0 minutes 00:07:23.135 Power Cycles: 0 00:07:23.135 Power On Hours: 0 hours 00:07:23.135 Unsafe Shutdowns: 0 00:07:23.135 Unrecoverable Media Errors: 0 00:07:23.135 Lifetime Error Log Entries: 0 00:07:23.135 Warning Temperature Time: 0 minutes 00:07:23.135 Critical Temperature Time: 0 minutes 00:07:23.135 00:07:23.135 Number of Queues 00:07:23.135 ================ 00:07:23.135 Number of I/O Submission Queues: 64 00:07:23.135 Number of I/O Completion Queues: 64 00:07:23.135 00:07:23.135 ZNS Specific Controller Data 00:07:23.135 ============================ 00:07:23.135 Zone Append Size Limit: 0 00:07:23.135 00:07:23.135 00:07:23.135 Active Namespaces 00:07:23.135 ================= 00:07:23.135 Namespace ID:1 00:07:23.135 Error Recovery Timeout: Unlimited 00:07:23.135 Command Set Identifier: NVM (00h) 00:07:23.135 Deallocate: Supported 00:07:23.135 Deallocated/Unwritten Error: Supported 00:07:23.135 Deallocated Read Value: All 0x00 00:07:23.135 Deallocate in Write Zeroes: Not Supported 00:07:23.135 Deallocated Guard Field: 0xFFFF 00:07:23.135 Flush: Supported 00:07:23.135 Reservation: Not Supported 00:07:23.135 Namespace Sharing Capabilities: Private 00:07:23.135 Size (in LBAs): 1048576 (4GiB) 00:07:23.135 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.135 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.135 Thin Provisioning: Not Supported 00:07:23.135 Per-NS Atomic Units: No 00:07:23.135 Maximum Single Source Range Length: 128 00:07:23.135 Maximum Copy Length: 128 00:07:23.135 Maximum Source Range Count: 128 00:07:23.135 NGUID/EUI64 Never Reused: No 00:07:23.135 Namespace Write Protected: No 00:07:23.135 Number of LBA Formats: 8 00:07:23.135 Current LBA Format: LBA Format #04 00:07:23.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.135 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.135 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.135 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.135 LBA Format #04: 
Data Size: 4096 Metadata Size: 0 00:07:23.135 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.135 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.135 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.135 00:07:23.135 NVM Specific Namespace Data 00:07:23.135 =========================== 00:07:23.135 Logical Block Storage Tag Mask: 0 00:07:23.135 Protection Information Capabilities: 00:07:23.135 16b Guard Protection Information Storage Tag Support: No 00:07:23.135 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.135 Storage Tag Check Read Support: No 00:07:23.135 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Namespace ID:2 00:07:23.135 Error Recovery Timeout: Unlimited 00:07:23.135 Command Set Identifier: NVM (00h) 00:07:23.135 Deallocate: Supported 00:07:23.135 Deallocated/Unwritten Error: Supported 00:07:23.135 Deallocated Read Value: All 0x00 00:07:23.135 Deallocate in Write Zeroes: Not Supported 00:07:23.135 Deallocated Guard Field: 0xFFFF 00:07:23.135 Flush: Supported 00:07:23.135 Reservation: Not Supported 00:07:23.135 Namespace Sharing Capabilities: Private 00:07:23.135 Size (in LBAs): 1048576 (4GiB) 00:07:23.135 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.135 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.135 Thin Provisioning: Not Supported 00:07:23.135 Per-NS Atomic Units: No 00:07:23.135 Maximum Single Source Range Length: 128 00:07:23.135 Maximum Copy Length: 128 00:07:23.135 Maximum Source Range Count: 128 00:07:23.135 NGUID/EUI64 Never Reused: No 00:07:23.135 Namespace Write Protected: No 00:07:23.135 Number of LBA Formats: 8 00:07:23.135 Current LBA Format: LBA Format #04 00:07:23.135 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.135 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.135 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.135 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.135 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.135 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.135 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.135 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.135 00:07:23.135 NVM Specific Namespace Data 00:07:23.135 =========================== 00:07:23.135 Logical Block Storage Tag Mask: 0 00:07:23.135 Protection Information Capabilities: 00:07:23.135 16b Guard Protection Information Storage Tag Support: No 00:07:23.135 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.135 Storage Tag Check Read Support: No 00:07:23.135 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 
16b Guard PI 00:07:23.135 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.135 Namespace ID:3 00:07:23.135 Error Recovery Timeout: Unlimited 00:07:23.135 Command Set Identifier: NVM (00h) 00:07:23.135 Deallocate: Supported 00:07:23.135 Deallocated/Unwritten Error: Supported 00:07:23.135 Deallocated Read Value: All 0x00 00:07:23.135 Deallocate in Write Zeroes: Not Supported 00:07:23.135 Deallocated Guard Field: 0xFFFF 00:07:23.135 Flush: Supported 00:07:23.135 Reservation: Not Supported 00:07:23.135 Namespace Sharing Capabilities: Private 00:07:23.135 Size (in LBAs): 1048576 (4GiB) 00:07:23.135 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.136 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.136 Thin Provisioning: Not Supported 00:07:23.136 Per-NS Atomic Units: No 00:07:23.136 Maximum Single Source Range Length: 128 00:07:23.136 Maximum Copy Length: 128 00:07:23.136 Maximum Source Range Count: 128 00:07:23.136 NGUID/EUI64 Never Reused: No 00:07:23.136 Namespace Write Protected: No 00:07:23.136 Number of LBA Formats: 8 00:07:23.136 Current LBA Format: LBA Format #04 00:07:23.136 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.136 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.136 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.136 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.136 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.136 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.136 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.136 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.136 00:07:23.136 NVM Specific Namespace Data 00:07:23.136 =========================== 00:07:23.136 Logical Block Storage Tag Mask: 0 00:07:23.136 Protection Information Capabilities: 00:07:23.136 16b Guard Protection Information Storage Tag Support: No 00:07:23.136 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.136 Storage Tag Check Read Support: No 00:07:23.136 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.136 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.136 02:02:47 nvme.nvme_identify -- nvme/nvme.sh@16 -- # 
/home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' -i 0 00:07:23.395 ===================================================== 00:07:23.395 NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:23.395 ===================================================== 00:07:23.395 Controller Capabilities/Features 00:07:23.395 ================================ 00:07:23.395 Vendor ID: 1b36 00:07:23.395 Subsystem Vendor ID: 1af4 00:07:23.395 Serial Number: 12340 00:07:23.395 Model Number: QEMU NVMe Ctrl 00:07:23.395 Firmware Version: 8.0.0 00:07:23.395 Recommended Arb Burst: 6 00:07:23.395 IEEE OUI Identifier: 00 54 52 00:07:23.395 Multi-path I/O 00:07:23.395 May have multiple subsystem ports: No 00:07:23.395 May have multiple controllers: No 00:07:23.395 Associated with SR-IOV VF: No 00:07:23.395 Max Data Transfer Size: 524288 00:07:23.395 Max Number of Namespaces: 256 00:07:23.395 Max Number of I/O Queues: 64 00:07:23.395 NVMe Specification Version (VS): 1.4 00:07:23.395 NVMe Specification Version (Identify): 1.4 00:07:23.395 Maximum Queue Entries: 2048 00:07:23.395 Contiguous Queues Required: Yes 00:07:23.395 Arbitration Mechanisms Supported 00:07:23.395 Weighted Round Robin: Not Supported 00:07:23.395 Vendor Specific: Not Supported 00:07:23.395 Reset Timeout: 7500 ms 00:07:23.395 Doorbell Stride: 4 bytes 00:07:23.395 NVM Subsystem Reset: Not Supported 00:07:23.395 Command Sets Supported 00:07:23.395 NVM Command Set: Supported 00:07:23.395 Boot Partition: Not Supported 00:07:23.395 Memory Page Size Minimum: 4096 bytes 00:07:23.395 Memory Page Size Maximum: 65536 bytes 00:07:23.395 Persistent Memory Region: Not Supported 00:07:23.395 Optional Asynchronous Events Supported 00:07:23.395 Namespace Attribute Notices: Supported 00:07:23.395 Firmware Activation Notices: Not Supported 00:07:23.395 ANA Change Notices: Not Supported 00:07:23.395 PLE Aggregate Log Change Notices: Not Supported 00:07:23.395 LBA Status Info Alert Notices: Not Supported 00:07:23.395 EGE Aggregate Log Change Notices: Not Supported 00:07:23.395 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.395 Zone Descriptor Change Notices: Not Supported 00:07:23.395 Discovery Log Change Notices: Not Supported 00:07:23.395 Controller Attributes 00:07:23.395 128-bit Host Identifier: Not Supported 00:07:23.395 Non-Operational Permissive Mode: Not Supported 00:07:23.395 NVM Sets: Not Supported 00:07:23.395 Read Recovery Levels: Not Supported 00:07:23.395 Endurance Groups: Not Supported 00:07:23.395 Predictable Latency Mode: Not Supported 00:07:23.395 Traffic Based Keep ALive: Not Supported 00:07:23.395 Namespace Granularity: Not Supported 00:07:23.395 SQ Associations: Not Supported 00:07:23.395 UUID List: Not Supported 00:07:23.396 Multi-Domain Subsystem: Not Supported 00:07:23.396 Fixed Capacity Management: Not Supported 00:07:23.396 Variable Capacity Management: Not Supported 00:07:23.396 Delete Endurance Group: Not Supported 00:07:23.396 Delete NVM Set: Not Supported 00:07:23.396 Extended LBA Formats Supported: Supported 00:07:23.396 Flexible Data Placement Supported: Not Supported 00:07:23.396 00:07:23.396 Controller Memory Buffer Support 00:07:23.396 ================================ 00:07:23.396 Supported: No 00:07:23.396 00:07:23.396 Persistent Memory Region Support 00:07:23.396 ================================ 00:07:23.396 Supported: No 00:07:23.396 00:07:23.396 Admin Command Set Attributes 00:07:23.396 ============================ 00:07:23.396 Security Send/Receive: Not Supported 00:07:23.396 
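The shell trace interleaved above shows where these dumps come from: nvme/nvme.sh loops over the PCIe addresses in a bdfs array and invokes spdk_nvme_identify once per controller. A minimal sketch of that pattern, assuming bdfs holds the four addresses seen in this log (the actual discovery code that populates the array is outside this excerpt):

  # Sketch of the loop visible in the trace (nvme/nvme.sh); bdfs contents assumed from this log.
  bdfs=(0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0)
  for bdf in "${bdfs[@]}"; do
      /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify \
          -r "trtype:PCIe traddr:$bdf" -i 0
  done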
Format NVM: Supported 00:07:23.396 Firmware Activate/Download: Not Supported 00:07:23.396 Namespace Management: Supported 00:07:23.396 Device Self-Test: Not Supported 00:07:23.396 Directives: Supported 00:07:23.396 NVMe-MI: Not Supported 00:07:23.396 Virtualization Management: Not Supported 00:07:23.396 Doorbell Buffer Config: Supported 00:07:23.396 Get LBA Status Capability: Not Supported 00:07:23.396 Command & Feature Lockdown Capability: Not Supported 00:07:23.396 Abort Command Limit: 4 00:07:23.396 Async Event Request Limit: 4 00:07:23.396 Number of Firmware Slots: N/A 00:07:23.396 Firmware Slot 1 Read-Only: N/A 00:07:23.396 Firmware Activation Without Reset: N/A 00:07:23.396 Multiple Update Detection Support: N/A 00:07:23.396 Firmware Update Granularity: No Information Provided 00:07:23.396 Per-Namespace SMART Log: Yes 00:07:23.396 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.396 Subsystem NQN: nqn.2019-08.org.qemu:12340 00:07:23.396 Command Effects Log Page: Supported 00:07:23.396 Get Log Page Extended Data: Supported 00:07:23.396 Telemetry Log Pages: Not Supported 00:07:23.396 Persistent Event Log Pages: Not Supported 00:07:23.396 Supported Log Pages Log Page: May Support 00:07:23.396 Commands Supported & Effects Log Page: Not Supported 00:07:23.396 Feature Identifiers & Effects Log Page:May Support 00:07:23.396 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.396 Data Area 4 for Telemetry Log: Not Supported 00:07:23.396 Error Log Page Entries Supported: 1 00:07:23.396 Keep Alive: Not Supported 00:07:23.396 00:07:23.396 NVM Command Set Attributes 00:07:23.396 ========================== 00:07:23.396 Submission Queue Entry Size 00:07:23.396 Max: 64 00:07:23.396 Min: 64 00:07:23.396 Completion Queue Entry Size 00:07:23.396 Max: 16 00:07:23.396 Min: 16 00:07:23.396 Number of Namespaces: 256 00:07:23.396 Compare Command: Supported 00:07:23.396 Write Uncorrectable Command: Not Supported 00:07:23.396 Dataset Management Command: Supported 00:07:23.396 Write Zeroes Command: Supported 00:07:23.396 Set Features Save Field: Supported 00:07:23.396 Reservations: Not Supported 00:07:23.396 Timestamp: Supported 00:07:23.396 Copy: Supported 00:07:23.396 Volatile Write Cache: Present 00:07:23.396 Atomic Write Unit (Normal): 1 00:07:23.396 Atomic Write Unit (PFail): 1 00:07:23.396 Atomic Compare & Write Unit: 1 00:07:23.396 Fused Compare & Write: Not Supported 00:07:23.396 Scatter-Gather List 00:07:23.396 SGL Command Set: Supported 00:07:23.396 SGL Keyed: Not Supported 00:07:23.396 SGL Bit Bucket Descriptor: Not Supported 00:07:23.396 SGL Metadata Pointer: Not Supported 00:07:23.396 Oversized SGL: Not Supported 00:07:23.396 SGL Metadata Address: Not Supported 00:07:23.396 SGL Offset: Not Supported 00:07:23.396 Transport SGL Data Block: Not Supported 00:07:23.396 Replay Protected Memory Block: Not Supported 00:07:23.396 00:07:23.396 Firmware Slot Information 00:07:23.396 ========================= 00:07:23.396 Active slot: 1 00:07:23.396 Slot 1 Firmware Revision: 1.0 00:07:23.396 00:07:23.396 00:07:23.396 Commands Supported and Effects 00:07:23.396 ============================== 00:07:23.396 Admin Commands 00:07:23.396 -------------- 00:07:23.396 Delete I/O Submission Queue (00h): Supported 00:07:23.396 Create I/O Submission Queue (01h): Supported 00:07:23.396 Get Log Page (02h): Supported 00:07:23.396 Delete I/O Completion Queue (04h): Supported 00:07:23.396 Create I/O Completion Queue (05h): Supported 00:07:23.396 Identify (06h): Supported 00:07:23.396 Abort (08h): Supported 
00:07:23.396 Set Features (09h): Supported 00:07:23.396 Get Features (0Ah): Supported 00:07:23.396 Asynchronous Event Request (0Ch): Supported 00:07:23.396 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.396 Directive Send (19h): Supported 00:07:23.396 Directive Receive (1Ah): Supported 00:07:23.396 Virtualization Management (1Ch): Supported 00:07:23.396 Doorbell Buffer Config (7Ch): Supported 00:07:23.396 Format NVM (80h): Supported LBA-Change 00:07:23.396 I/O Commands 00:07:23.396 ------------ 00:07:23.396 Flush (00h): Supported LBA-Change 00:07:23.396 Write (01h): Supported LBA-Change 00:07:23.396 Read (02h): Supported 00:07:23.396 Compare (05h): Supported 00:07:23.396 Write Zeroes (08h): Supported LBA-Change 00:07:23.396 Dataset Management (09h): Supported LBA-Change 00:07:23.396 Unknown (0Ch): Supported 00:07:23.396 Unknown (12h): Supported 00:07:23.396 Copy (19h): Supported LBA-Change 00:07:23.396 Unknown (1Dh): Supported LBA-Change 00:07:23.396 00:07:23.396 Error Log 00:07:23.396 ========= 00:07:23.396 00:07:23.396 Arbitration 00:07:23.396 =========== 00:07:23.396 Arbitration Burst: no limit 00:07:23.396 00:07:23.396 Power Management 00:07:23.396 ================ 00:07:23.396 Number of Power States: 1 00:07:23.396 Current Power State: Power State #0 00:07:23.396 Power State #0: 00:07:23.396 Max Power: 25.00 W 00:07:23.396 Non-Operational State: Operational 00:07:23.396 Entry Latency: 16 microseconds 00:07:23.396 Exit Latency: 4 microseconds 00:07:23.396 Relative Read Throughput: 0 00:07:23.396 Relative Read Latency: 0 00:07:23.396 Relative Write Throughput: 0 00:07:23.396 Relative Write Latency: 0 00:07:23.396 Idle Power: Not Reported 00:07:23.396 Active Power: Not Reported 00:07:23.396 Non-Operational Permissive Mode: Not Supported 00:07:23.396 00:07:23.396 Health Information 00:07:23.396 ================== 00:07:23.396 Critical Warnings: 00:07:23.396 Available Spare Space: OK 00:07:23.396 Temperature: OK 00:07:23.396 Device Reliability: OK 00:07:23.396 Read Only: No 00:07:23.396 Volatile Memory Backup: OK 00:07:23.396 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.396 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.396 Available Spare: 0% 00:07:23.396 Available Spare Threshold: 0% 00:07:23.396 Life Percentage Used: 0% 00:07:23.396 Data Units Read: 651 00:07:23.396 Data Units Written: 579 00:07:23.396 Host Read Commands: 34382 00:07:23.396 Host Write Commands: 34168 00:07:23.396 Controller Busy Time: 0 minutes 00:07:23.396 Power Cycles: 0 00:07:23.396 Power On Hours: 0 hours 00:07:23.396 Unsafe Shutdowns: 0 00:07:23.396 Unrecoverable Media Errors: 0 00:07:23.396 Lifetime Error Log Entries: 0 00:07:23.396 Warning Temperature Time: 0 minutes 00:07:23.396 Critical Temperature Time: 0 minutes 00:07:23.396 00:07:23.396 Number of Queues 00:07:23.396 ================ 00:07:23.396 Number of I/O Submission Queues: 64 00:07:23.396 Number of I/O Completion Queues: 64 00:07:23.396 00:07:23.396 ZNS Specific Controller Data 00:07:23.396 ============================ 00:07:23.396 Zone Append Size Limit: 0 00:07:23.396 00:07:23.396 00:07:23.396 Active Namespaces 00:07:23.396 ================= 00:07:23.396 Namespace ID:1 00:07:23.396 Error Recovery Timeout: Unlimited 00:07:23.396 Command Set Identifier: NVM (00h) 00:07:23.396 Deallocate: Supported 00:07:23.396 Deallocated/Unwritten Error: Supported 00:07:23.396 Deallocated Read Value: All 0x00 00:07:23.396 Deallocate in Write Zeroes: Not Supported 00:07:23.396 Deallocated Guard Field: 0xFFFF 00:07:23.396 Flush: 
Supported 00:07:23.396 Reservation: Not Supported 00:07:23.396 Metadata Transferred as: Separate Metadata Buffer 00:07:23.396 Namespace Sharing Capabilities: Private 00:07:23.396 Size (in LBAs): 1548666 (5GiB) 00:07:23.396 Capacity (in LBAs): 1548666 (5GiB) 00:07:23.396 Utilization (in LBAs): 1548666 (5GiB) 00:07:23.396 Thin Provisioning: Not Supported 00:07:23.396 Per-NS Atomic Units: No 00:07:23.396 Maximum Single Source Range Length: 128 00:07:23.396 Maximum Copy Length: 128 00:07:23.396 Maximum Source Range Count: 128 00:07:23.396 NGUID/EUI64 Never Reused: No 00:07:23.396 Namespace Write Protected: No 00:07:23.396 Number of LBA Formats: 8 00:07:23.396 Current LBA Format: LBA Format #07 00:07:23.396 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.396 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.396 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.397 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.397 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.397 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.397 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.397 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.397 00:07:23.397 NVM Specific Namespace Data 00:07:23.397 =========================== 00:07:23.397 Logical Block Storage Tag Mask: 0 00:07:23.397 Protection Information Capabilities: 00:07:23.397 16b Guard Protection Information Storage Tag Support: No 00:07:23.397 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.397 Storage Tag Check Read Support: No 00:07:23.397 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.397 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.397 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' -i 0 00:07:23.655 ===================================================== 00:07:23.655 NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:23.655 ===================================================== 00:07:23.655 Controller Capabilities/Features 00:07:23.655 ================================ 00:07:23.655 Vendor ID: 1b36 00:07:23.655 Subsystem Vendor ID: 1af4 00:07:23.655 Serial Number: 12341 00:07:23.655 Model Number: QEMU NVMe Ctrl 00:07:23.655 Firmware Version: 8.0.0 00:07:23.655 Recommended Arb Burst: 6 00:07:23.655 IEEE OUI Identifier: 00 54 52 00:07:23.655 Multi-path I/O 00:07:23.655 May have multiple subsystem ports: No 00:07:23.655 May have multiple controllers: No 00:07:23.655 Associated with SR-IOV VF: No 00:07:23.655 Max Data Transfer Size: 524288 00:07:23.655 Max Number of Namespaces: 256 00:07:23.655 Max Number of I/O Queues: 64 00:07:23.655 NVMe 
Specification Version (VS): 1.4 00:07:23.655 NVMe Specification Version (Identify): 1.4 00:07:23.655 Maximum Queue Entries: 2048 00:07:23.655 Contiguous Queues Required: Yes 00:07:23.655 Arbitration Mechanisms Supported 00:07:23.655 Weighted Round Robin: Not Supported 00:07:23.655 Vendor Specific: Not Supported 00:07:23.655 Reset Timeout: 7500 ms 00:07:23.655 Doorbell Stride: 4 bytes 00:07:23.655 NVM Subsystem Reset: Not Supported 00:07:23.655 Command Sets Supported 00:07:23.655 NVM Command Set: Supported 00:07:23.656 Boot Partition: Not Supported 00:07:23.656 Memory Page Size Minimum: 4096 bytes 00:07:23.656 Memory Page Size Maximum: 65536 bytes 00:07:23.656 Persistent Memory Region: Not Supported 00:07:23.656 Optional Asynchronous Events Supported 00:07:23.656 Namespace Attribute Notices: Supported 00:07:23.656 Firmware Activation Notices: Not Supported 00:07:23.656 ANA Change Notices: Not Supported 00:07:23.656 PLE Aggregate Log Change Notices: Not Supported 00:07:23.656 LBA Status Info Alert Notices: Not Supported 00:07:23.656 EGE Aggregate Log Change Notices: Not Supported 00:07:23.656 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.656 Zone Descriptor Change Notices: Not Supported 00:07:23.656 Discovery Log Change Notices: Not Supported 00:07:23.656 Controller Attributes 00:07:23.656 128-bit Host Identifier: Not Supported 00:07:23.656 Non-Operational Permissive Mode: Not Supported 00:07:23.656 NVM Sets: Not Supported 00:07:23.656 Read Recovery Levels: Not Supported 00:07:23.656 Endurance Groups: Not Supported 00:07:23.656 Predictable Latency Mode: Not Supported 00:07:23.656 Traffic Based Keep ALive: Not Supported 00:07:23.656 Namespace Granularity: Not Supported 00:07:23.656 SQ Associations: Not Supported 00:07:23.656 UUID List: Not Supported 00:07:23.656 Multi-Domain Subsystem: Not Supported 00:07:23.656 Fixed Capacity Management: Not Supported 00:07:23.656 Variable Capacity Management: Not Supported 00:07:23.656 Delete Endurance Group: Not Supported 00:07:23.656 Delete NVM Set: Not Supported 00:07:23.656 Extended LBA Formats Supported: Supported 00:07:23.656 Flexible Data Placement Supported: Not Supported 00:07:23.656 00:07:23.656 Controller Memory Buffer Support 00:07:23.656 ================================ 00:07:23.656 Supported: No 00:07:23.656 00:07:23.656 Persistent Memory Region Support 00:07:23.656 ================================ 00:07:23.656 Supported: No 00:07:23.656 00:07:23.656 Admin Command Set Attributes 00:07:23.656 ============================ 00:07:23.656 Security Send/Receive: Not Supported 00:07:23.656 Format NVM: Supported 00:07:23.656 Firmware Activate/Download: Not Supported 00:07:23.656 Namespace Management: Supported 00:07:23.656 Device Self-Test: Not Supported 00:07:23.656 Directives: Supported 00:07:23.656 NVMe-MI: Not Supported 00:07:23.656 Virtualization Management: Not Supported 00:07:23.656 Doorbell Buffer Config: Supported 00:07:23.656 Get LBA Status Capability: Not Supported 00:07:23.656 Command & Feature Lockdown Capability: Not Supported 00:07:23.656 Abort Command Limit: 4 00:07:23.656 Async Event Request Limit: 4 00:07:23.656 Number of Firmware Slots: N/A 00:07:23.656 Firmware Slot 1 Read-Only: N/A 00:07:23.656 Firmware Activation Without Reset: N/A 00:07:23.656 Multiple Update Detection Support: N/A 00:07:23.656 Firmware Update Granularity: No Information Provided 00:07:23.656 Per-Namespace SMART Log: Yes 00:07:23.656 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.656 Subsystem NQN: nqn.2019-08.org.qemu:12341 
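The Subsystem NQN reported above, nqn.2019-08.org.qemu:12341, follows QEMU's convention of deriving the default subsystem NQN from the emulated controller's serial property. A hedged sketch of how such a controller is typically attached on the QEMU command line; the drive id and image path are placeholders, and only the serial comes from this log:

  # Hypothetical QEMU fragment; only serial=12341 is taken from the dump above.
  qemu-system-x86_64 \
      -drive file=nvme1.img,if=none,id=nvm1 \
      -device nvme,serial=12341,drive=nvm1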
00:07:23.656 Command Effects Log Page: Supported 00:07:23.656 Get Log Page Extended Data: Supported 00:07:23.656 Telemetry Log Pages: Not Supported 00:07:23.656 Persistent Event Log Pages: Not Supported 00:07:23.656 Supported Log Pages Log Page: May Support 00:07:23.656 Commands Supported & Effects Log Page: Not Supported 00:07:23.656 Feature Identifiers & Effects Log Page:May Support 00:07:23.656 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.656 Data Area 4 for Telemetry Log: Not Supported 00:07:23.656 Error Log Page Entries Supported: 1 00:07:23.656 Keep Alive: Not Supported 00:07:23.656 00:07:23.656 NVM Command Set Attributes 00:07:23.656 ========================== 00:07:23.656 Submission Queue Entry Size 00:07:23.656 Max: 64 00:07:23.656 Min: 64 00:07:23.656 Completion Queue Entry Size 00:07:23.656 Max: 16 00:07:23.656 Min: 16 00:07:23.656 Number of Namespaces: 256 00:07:23.656 Compare Command: Supported 00:07:23.656 Write Uncorrectable Command: Not Supported 00:07:23.656 Dataset Management Command: Supported 00:07:23.656 Write Zeroes Command: Supported 00:07:23.656 Set Features Save Field: Supported 00:07:23.656 Reservations: Not Supported 00:07:23.656 Timestamp: Supported 00:07:23.656 Copy: Supported 00:07:23.656 Volatile Write Cache: Present 00:07:23.656 Atomic Write Unit (Normal): 1 00:07:23.656 Atomic Write Unit (PFail): 1 00:07:23.656 Atomic Compare & Write Unit: 1 00:07:23.656 Fused Compare & Write: Not Supported 00:07:23.656 Scatter-Gather List 00:07:23.656 SGL Command Set: Supported 00:07:23.656 SGL Keyed: Not Supported 00:07:23.656 SGL Bit Bucket Descriptor: Not Supported 00:07:23.656 SGL Metadata Pointer: Not Supported 00:07:23.656 Oversized SGL: Not Supported 00:07:23.656 SGL Metadata Address: Not Supported 00:07:23.656 SGL Offset: Not Supported 00:07:23.656 Transport SGL Data Block: Not Supported 00:07:23.656 Replay Protected Memory Block: Not Supported 00:07:23.656 00:07:23.656 Firmware Slot Information 00:07:23.656 ========================= 00:07:23.656 Active slot: 1 00:07:23.656 Slot 1 Firmware Revision: 1.0 00:07:23.656 00:07:23.656 00:07:23.656 Commands Supported and Effects 00:07:23.656 ============================== 00:07:23.656 Admin Commands 00:07:23.656 -------------- 00:07:23.656 Delete I/O Submission Queue (00h): Supported 00:07:23.656 Create I/O Submission Queue (01h): Supported 00:07:23.656 Get Log Page (02h): Supported 00:07:23.656 Delete I/O Completion Queue (04h): Supported 00:07:23.656 Create I/O Completion Queue (05h): Supported 00:07:23.656 Identify (06h): Supported 00:07:23.656 Abort (08h): Supported 00:07:23.656 Set Features (09h): Supported 00:07:23.656 Get Features (0Ah): Supported 00:07:23.656 Asynchronous Event Request (0Ch): Supported 00:07:23.656 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.656 Directive Send (19h): Supported 00:07:23.656 Directive Receive (1Ah): Supported 00:07:23.656 Virtualization Management (1Ch): Supported 00:07:23.656 Doorbell Buffer Config (7Ch): Supported 00:07:23.656 Format NVM (80h): Supported LBA-Change 00:07:23.656 I/O Commands 00:07:23.656 ------------ 00:07:23.656 Flush (00h): Supported LBA-Change 00:07:23.656 Write (01h): Supported LBA-Change 00:07:23.656 Read (02h): Supported 00:07:23.656 Compare (05h): Supported 00:07:23.656 Write Zeroes (08h): Supported LBA-Change 00:07:23.656 Dataset Management (09h): Supported LBA-Change 00:07:23.656 Unknown (0Ch): Supported 00:07:23.656 Unknown (12h): Supported 00:07:23.656 Copy (19h): Supported LBA-Change 00:07:23.656 Unknown (1Dh): 
Supported LBA-Change 00:07:23.656 00:07:23.656 Error Log 00:07:23.656 ========= 00:07:23.656 00:07:23.656 Arbitration 00:07:23.656 =========== 00:07:23.656 Arbitration Burst: no limit 00:07:23.656 00:07:23.656 Power Management 00:07:23.656 ================ 00:07:23.656 Number of Power States: 1 00:07:23.656 Current Power State: Power State #0 00:07:23.656 Power State #0: 00:07:23.656 Max Power: 25.00 W 00:07:23.656 Non-Operational State: Operational 00:07:23.656 Entry Latency: 16 microseconds 00:07:23.656 Exit Latency: 4 microseconds 00:07:23.656 Relative Read Throughput: 0 00:07:23.656 Relative Read Latency: 0 00:07:23.656 Relative Write Throughput: 0 00:07:23.656 Relative Write Latency: 0 00:07:23.656 Idle Power: Not Reported 00:07:23.656 Active Power: Not Reported 00:07:23.656 Non-Operational Permissive Mode: Not Supported 00:07:23.656 00:07:23.656 Health Information 00:07:23.656 ================== 00:07:23.656 Critical Warnings: 00:07:23.656 Available Spare Space: OK 00:07:23.656 Temperature: OK 00:07:23.656 Device Reliability: OK 00:07:23.656 Read Only: No 00:07:23.656 Volatile Memory Backup: OK 00:07:23.656 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.656 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.656 Available Spare: 0% 00:07:23.656 Available Spare Threshold: 0% 00:07:23.656 Life Percentage Used: 0% 00:07:23.656 Data Units Read: 985 00:07:23.656 Data Units Written: 852 00:07:23.656 Host Read Commands: 50946 00:07:23.656 Host Write Commands: 49744 00:07:23.656 Controller Busy Time: 0 minutes 00:07:23.656 Power Cycles: 0 00:07:23.656 Power On Hours: 0 hours 00:07:23.656 Unsafe Shutdowns: 0 00:07:23.656 Unrecoverable Media Errors: 0 00:07:23.656 Lifetime Error Log Entries: 0 00:07:23.656 Warning Temperature Time: 0 minutes 00:07:23.656 Critical Temperature Time: 0 minutes 00:07:23.656 00:07:23.656 Number of Queues 00:07:23.656 ================ 00:07:23.656 Number of I/O Submission Queues: 64 00:07:23.656 Number of I/O Completion Queues: 64 00:07:23.656 00:07:23.656 ZNS Specific Controller Data 00:07:23.656 ============================ 00:07:23.656 Zone Append Size Limit: 0 00:07:23.656 00:07:23.656 00:07:23.656 Active Namespaces 00:07:23.656 ================= 00:07:23.656 Namespace ID:1 00:07:23.656 Error Recovery Timeout: Unlimited 00:07:23.656 Command Set Identifier: NVM (00h) 00:07:23.656 Deallocate: Supported 00:07:23.656 Deallocated/Unwritten Error: Supported 00:07:23.657 Deallocated Read Value: All 0x00 00:07:23.657 Deallocate in Write Zeroes: Not Supported 00:07:23.657 Deallocated Guard Field: 0xFFFF 00:07:23.657 Flush: Supported 00:07:23.657 Reservation: Not Supported 00:07:23.657 Namespace Sharing Capabilities: Private 00:07:23.657 Size (in LBAs): 1310720 (5GiB) 00:07:23.657 Capacity (in LBAs): 1310720 (5GiB) 00:07:23.657 Utilization (in LBAs): 1310720 (5GiB) 00:07:23.657 Thin Provisioning: Not Supported 00:07:23.657 Per-NS Atomic Units: No 00:07:23.657 Maximum Single Source Range Length: 128 00:07:23.657 Maximum Copy Length: 128 00:07:23.657 Maximum Source Range Count: 128 00:07:23.657 NGUID/EUI64 Never Reused: No 00:07:23.657 Namespace Write Protected: No 00:07:23.657 Number of LBA Formats: 8 00:07:23.657 Current LBA Format: LBA Format #04 00:07:23.657 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.657 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.657 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.657 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.657 LBA Format #04: Data Size: 4096 Metadata Size: 0 
00:07:23.657 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.657 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.657 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.657 00:07:23.657 NVM Specific Namespace Data 00:07:23.657 =========================== 00:07:23.657 Logical Block Storage Tag Mask: 0 00:07:23.657 Protection Information Capabilities: 00:07:23.657 16b Guard Protection Information Storage Tag Support: No 00:07:23.657 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.657 Storage Tag Check Read Support: No 00:07:23.657 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.657 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.657 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' -i 0 00:07:23.916 ===================================================== 00:07:23.916 NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:23.916 ===================================================== 00:07:23.916 Controller Capabilities/Features 00:07:23.916 ================================ 00:07:23.916 Vendor ID: 1b36 00:07:23.916 Subsystem Vendor ID: 1af4 00:07:23.916 Serial Number: 12342 00:07:23.916 Model Number: QEMU NVMe Ctrl 00:07:23.916 Firmware Version: 8.0.0 00:07:23.916 Recommended Arb Burst: 6 00:07:23.916 IEEE OUI Identifier: 00 54 52 00:07:23.916 Multi-path I/O 00:07:23.916 May have multiple subsystem ports: No 00:07:23.916 May have multiple controllers: No 00:07:23.916 Associated with SR-IOV VF: No 00:07:23.916 Max Data Transfer Size: 524288 00:07:23.916 Max Number of Namespaces: 256 00:07:23.916 Max Number of I/O Queues: 64 00:07:23.916 NVMe Specification Version (VS): 1.4 00:07:23.916 NVMe Specification Version (Identify): 1.4 00:07:23.916 Maximum Queue Entries: 2048 00:07:23.916 Contiguous Queues Required: Yes 00:07:23.916 Arbitration Mechanisms Supported 00:07:23.916 Weighted Round Robin: Not Supported 00:07:23.916 Vendor Specific: Not Supported 00:07:23.916 Reset Timeout: 7500 ms 00:07:23.916 Doorbell Stride: 4 bytes 00:07:23.916 NVM Subsystem Reset: Not Supported 00:07:23.916 Command Sets Supported 00:07:23.916 NVM Command Set: Supported 00:07:23.916 Boot Partition: Not Supported 00:07:23.916 Memory Page Size Minimum: 4096 bytes 00:07:23.916 Memory Page Size Maximum: 65536 bytes 00:07:23.916 Persistent Memory Region: Not Supported 00:07:23.916 Optional Asynchronous Events Supported 00:07:23.916 Namespace Attribute Notices: Supported 00:07:23.916 Firmware Activation Notices: Not Supported 00:07:23.916 ANA Change Notices: Not Supported 00:07:23.916 PLE Aggregate Log Change Notices: Not Supported 00:07:23.916 LBA Status Info Alert Notices: 
Not Supported 00:07:23.917 EGE Aggregate Log Change Notices: Not Supported 00:07:23.917 Normal NVM Subsystem Shutdown event: Not Supported 00:07:23.917 Zone Descriptor Change Notices: Not Supported 00:07:23.917 Discovery Log Change Notices: Not Supported 00:07:23.917 Controller Attributes 00:07:23.917 128-bit Host Identifier: Not Supported 00:07:23.917 Non-Operational Permissive Mode: Not Supported 00:07:23.917 NVM Sets: Not Supported 00:07:23.917 Read Recovery Levels: Not Supported 00:07:23.917 Endurance Groups: Not Supported 00:07:23.917 Predictable Latency Mode: Not Supported 00:07:23.917 Traffic Based Keep ALive: Not Supported 00:07:23.917 Namespace Granularity: Not Supported 00:07:23.917 SQ Associations: Not Supported 00:07:23.917 UUID List: Not Supported 00:07:23.917 Multi-Domain Subsystem: Not Supported 00:07:23.917 Fixed Capacity Management: Not Supported 00:07:23.917 Variable Capacity Management: Not Supported 00:07:23.917 Delete Endurance Group: Not Supported 00:07:23.917 Delete NVM Set: Not Supported 00:07:23.917 Extended LBA Formats Supported: Supported 00:07:23.917 Flexible Data Placement Supported: Not Supported 00:07:23.917 00:07:23.917 Controller Memory Buffer Support 00:07:23.917 ================================ 00:07:23.917 Supported: No 00:07:23.917 00:07:23.917 Persistent Memory Region Support 00:07:23.917 ================================ 00:07:23.917 Supported: No 00:07:23.917 00:07:23.917 Admin Command Set Attributes 00:07:23.917 ============================ 00:07:23.917 Security Send/Receive: Not Supported 00:07:23.917 Format NVM: Supported 00:07:23.917 Firmware Activate/Download: Not Supported 00:07:23.917 Namespace Management: Supported 00:07:23.917 Device Self-Test: Not Supported 00:07:23.917 Directives: Supported 00:07:23.917 NVMe-MI: Not Supported 00:07:23.917 Virtualization Management: Not Supported 00:07:23.917 Doorbell Buffer Config: Supported 00:07:23.917 Get LBA Status Capability: Not Supported 00:07:23.917 Command & Feature Lockdown Capability: Not Supported 00:07:23.917 Abort Command Limit: 4 00:07:23.917 Async Event Request Limit: 4 00:07:23.917 Number of Firmware Slots: N/A 00:07:23.917 Firmware Slot 1 Read-Only: N/A 00:07:23.917 Firmware Activation Without Reset: N/A 00:07:23.917 Multiple Update Detection Support: N/A 00:07:23.917 Firmware Update Granularity: No Information Provided 00:07:23.917 Per-Namespace SMART Log: Yes 00:07:23.917 Asymmetric Namespace Access Log Page: Not Supported 00:07:23.917 Subsystem NQN: nqn.2019-08.org.qemu:12342 00:07:23.917 Command Effects Log Page: Supported 00:07:23.917 Get Log Page Extended Data: Supported 00:07:23.917 Telemetry Log Pages: Not Supported 00:07:23.917 Persistent Event Log Pages: Not Supported 00:07:23.917 Supported Log Pages Log Page: May Support 00:07:23.917 Commands Supported & Effects Log Page: Not Supported 00:07:23.917 Feature Identifiers & Effects Log Page:May Support 00:07:23.917 NVMe-MI Commands & Effects Log Page: May Support 00:07:23.917 Data Area 4 for Telemetry Log: Not Supported 00:07:23.917 Error Log Page Entries Supported: 1 00:07:23.917 Keep Alive: Not Supported 00:07:23.917 00:07:23.917 NVM Command Set Attributes 00:07:23.917 ========================== 00:07:23.917 Submission Queue Entry Size 00:07:23.917 Max: 64 00:07:23.917 Min: 64 00:07:23.917 Completion Queue Entry Size 00:07:23.917 Max: 16 00:07:23.917 Min: 16 00:07:23.917 Number of Namespaces: 256 00:07:23.917 Compare Command: Supported 00:07:23.917 Write Uncorrectable Command: Not Supported 00:07:23.917 Dataset Management Command: 
Supported 00:07:23.917 Write Zeroes Command: Supported 00:07:23.917 Set Features Save Field: Supported 00:07:23.917 Reservations: Not Supported 00:07:23.917 Timestamp: Supported 00:07:23.917 Copy: Supported 00:07:23.917 Volatile Write Cache: Present 00:07:23.917 Atomic Write Unit (Normal): 1 00:07:23.917 Atomic Write Unit (PFail): 1 00:07:23.917 Atomic Compare & Write Unit: 1 00:07:23.917 Fused Compare & Write: Not Supported 00:07:23.917 Scatter-Gather List 00:07:23.917 SGL Command Set: Supported 00:07:23.917 SGL Keyed: Not Supported 00:07:23.917 SGL Bit Bucket Descriptor: Not Supported 00:07:23.917 SGL Metadata Pointer: Not Supported 00:07:23.917 Oversized SGL: Not Supported 00:07:23.917 SGL Metadata Address: Not Supported 00:07:23.917 SGL Offset: Not Supported 00:07:23.917 Transport SGL Data Block: Not Supported 00:07:23.917 Replay Protected Memory Block: Not Supported 00:07:23.917 00:07:23.917 Firmware Slot Information 00:07:23.917 ========================= 00:07:23.917 Active slot: 1 00:07:23.917 Slot 1 Firmware Revision: 1.0 00:07:23.917 00:07:23.917 00:07:23.917 Commands Supported and Effects 00:07:23.917 ============================== 00:07:23.917 Admin Commands 00:07:23.917 -------------- 00:07:23.917 Delete I/O Submission Queue (00h): Supported 00:07:23.917 Create I/O Submission Queue (01h): Supported 00:07:23.917 Get Log Page (02h): Supported 00:07:23.917 Delete I/O Completion Queue (04h): Supported 00:07:23.917 Create I/O Completion Queue (05h): Supported 00:07:23.917 Identify (06h): Supported 00:07:23.917 Abort (08h): Supported 00:07:23.917 Set Features (09h): Supported 00:07:23.917 Get Features (0Ah): Supported 00:07:23.917 Asynchronous Event Request (0Ch): Supported 00:07:23.917 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:23.917 Directive Send (19h): Supported 00:07:23.917 Directive Receive (1Ah): Supported 00:07:23.917 Virtualization Management (1Ch): Supported 00:07:23.917 Doorbell Buffer Config (7Ch): Supported 00:07:23.917 Format NVM (80h): Supported LBA-Change 00:07:23.917 I/O Commands 00:07:23.917 ------------ 00:07:23.917 Flush (00h): Supported LBA-Change 00:07:23.917 Write (01h): Supported LBA-Change 00:07:23.917 Read (02h): Supported 00:07:23.917 Compare (05h): Supported 00:07:23.917 Write Zeroes (08h): Supported LBA-Change 00:07:23.917 Dataset Management (09h): Supported LBA-Change 00:07:23.917 Unknown (0Ch): Supported 00:07:23.917 Unknown (12h): Supported 00:07:23.917 Copy (19h): Supported LBA-Change 00:07:23.917 Unknown (1Dh): Supported LBA-Change 00:07:23.917 00:07:23.917 Error Log 00:07:23.917 ========= 00:07:23.917 00:07:23.917 Arbitration 00:07:23.917 =========== 00:07:23.917 Arbitration Burst: no limit 00:07:23.917 00:07:23.917 Power Management 00:07:23.917 ================ 00:07:23.917 Number of Power States: 1 00:07:23.917 Current Power State: Power State #0 00:07:23.917 Power State #0: 00:07:23.917 Max Power: 25.00 W 00:07:23.917 Non-Operational State: Operational 00:07:23.917 Entry Latency: 16 microseconds 00:07:23.917 Exit Latency: 4 microseconds 00:07:23.917 Relative Read Throughput: 0 00:07:23.917 Relative Read Latency: 0 00:07:23.917 Relative Write Throughput: 0 00:07:23.917 Relative Write Latency: 0 00:07:23.917 Idle Power: Not Reported 00:07:23.917 Active Power: Not Reported 00:07:23.917 Non-Operational Permissive Mode: Not Supported 00:07:23.917 00:07:23.917 Health Information 00:07:23.917 ================== 00:07:23.917 Critical Warnings: 00:07:23.917 Available Spare Space: OK 00:07:23.917 Temperature: OK 00:07:23.917 Device 
Reliability: OK 00:07:23.917 Read Only: No 00:07:23.917 Volatile Memory Backup: OK 00:07:23.917 Current Temperature: 323 Kelvin (50 Celsius) 00:07:23.917 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:23.917 Available Spare: 0% 00:07:23.917 Available Spare Threshold: 0% 00:07:23.917 Life Percentage Used: 0% 00:07:23.917 Data Units Read: 2123 00:07:23.917 Data Units Written: 1910 00:07:23.917 Host Read Commands: 105144 00:07:23.917 Host Write Commands: 103414 00:07:23.917 Controller Busy Time: 0 minutes 00:07:23.917 Power Cycles: 0 00:07:23.917 Power On Hours: 0 hours 00:07:23.917 Unsafe Shutdowns: 0 00:07:23.917 Unrecoverable Media Errors: 0 00:07:23.917 Lifetime Error Log Entries: 0 00:07:23.917 Warning Temperature Time: 0 minutes 00:07:23.917 Critical Temperature Time: 0 minutes 00:07:23.917 00:07:23.917 Number of Queues 00:07:23.917 ================ 00:07:23.917 Number of I/O Submission Queues: 64 00:07:23.917 Number of I/O Completion Queues: 64 00:07:23.917 00:07:23.917 ZNS Specific Controller Data 00:07:23.917 ============================ 00:07:23.917 Zone Append Size Limit: 0 00:07:23.917 00:07:23.917 00:07:23.917 Active Namespaces 00:07:23.917 ================= 00:07:23.917 Namespace ID:1 00:07:23.917 Error Recovery Timeout: Unlimited 00:07:23.917 Command Set Identifier: NVM (00h) 00:07:23.917 Deallocate: Supported 00:07:23.917 Deallocated/Unwritten Error: Supported 00:07:23.917 Deallocated Read Value: All 0x00 00:07:23.917 Deallocate in Write Zeroes: Not Supported 00:07:23.917 Deallocated Guard Field: 0xFFFF 00:07:23.917 Flush: Supported 00:07:23.917 Reservation: Not Supported 00:07:23.917 Namespace Sharing Capabilities: Private 00:07:23.917 Size (in LBAs): 1048576 (4GiB) 00:07:23.917 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.917 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.917 Thin Provisioning: Not Supported 00:07:23.917 Per-NS Atomic Units: No 00:07:23.918 Maximum Single Source Range Length: 128 00:07:23.918 Maximum Copy Length: 128 00:07:23.918 Maximum Source Range Count: 128 00:07:23.918 NGUID/EUI64 Never Reused: No 00:07:23.918 Namespace Write Protected: No 00:07:23.918 Number of LBA Formats: 8 00:07:23.918 Current LBA Format: LBA Format #04 00:07:23.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.918 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.918 00:07:23.918 NVM Specific Namespace Data 00:07:23.918 =========================== 00:07:23.918 Logical Block Storage Tag Mask: 0 00:07:23.918 Protection Information Capabilities: 00:07:23.918 16b Guard Protection Information Storage Tag Support: No 00:07:23.918 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.918 Storage Tag Check Read Support: No 00:07:23.918 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #04: 
Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Namespace ID:2 00:07:23.918 Error Recovery Timeout: Unlimited 00:07:23.918 Command Set Identifier: NVM (00h) 00:07:23.918 Deallocate: Supported 00:07:23.918 Deallocated/Unwritten Error: Supported 00:07:23.918 Deallocated Read Value: All 0x00 00:07:23.918 Deallocate in Write Zeroes: Not Supported 00:07:23.918 Deallocated Guard Field: 0xFFFF 00:07:23.918 Flush: Supported 00:07:23.918 Reservation: Not Supported 00:07:23.918 Namespace Sharing Capabilities: Private 00:07:23.918 Size (in LBAs): 1048576 (4GiB) 00:07:23.918 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.918 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.918 Thin Provisioning: Not Supported 00:07:23.918 Per-NS Atomic Units: No 00:07:23.918 Maximum Single Source Range Length: 128 00:07:23.918 Maximum Copy Length: 128 00:07:23.918 Maximum Source Range Count: 128 00:07:23.918 NGUID/EUI64 Never Reused: No 00:07:23.918 Namespace Write Protected: No 00:07:23.918 Number of LBA Formats: 8 00:07:23.918 Current LBA Format: LBA Format #04 00:07:23.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.918 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.918 00:07:23.918 NVM Specific Namespace Data 00:07:23.918 =========================== 00:07:23.918 Logical Block Storage Tag Mask: 0 00:07:23.918 Protection Information Capabilities: 00:07:23.918 16b Guard Protection Information Storage Tag Support: No 00:07:23.918 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.918 Storage Tag Check Read Support: No 00:07:23.918 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Namespace ID:3 00:07:23.918 Error Recovery Timeout: Unlimited 00:07:23.918 Command Set Identifier: NVM (00h) 00:07:23.918 Deallocate: Supported 00:07:23.918 Deallocated/Unwritten Error: Supported 00:07:23.918 Deallocated Read Value: All 0x00 00:07:23.918 Deallocate in Write Zeroes: Not Supported 00:07:23.918 Deallocated Guard Field: 0xFFFF 00:07:23.918 Flush: Supported 00:07:23.918 Reservation: Not Supported 00:07:23.918 
Namespace Sharing Capabilities: Private 00:07:23.918 Size (in LBAs): 1048576 (4GiB) 00:07:23.918 Capacity (in LBAs): 1048576 (4GiB) 00:07:23.918 Utilization (in LBAs): 1048576 (4GiB) 00:07:23.918 Thin Provisioning: Not Supported 00:07:23.918 Per-NS Atomic Units: No 00:07:23.918 Maximum Single Source Range Length: 128 00:07:23.918 Maximum Copy Length: 128 00:07:23.918 Maximum Source Range Count: 128 00:07:23.918 NGUID/EUI64 Never Reused: No 00:07:23.918 Namespace Write Protected: No 00:07:23.918 Number of LBA Formats: 8 00:07:23.918 Current LBA Format: LBA Format #04 00:07:23.918 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:23.918 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:23.918 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:23.918 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:23.918 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:23.918 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:23.918 LBA Format #06: Data Size: 4096 Metadata Size: 16 00:07:23.918 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:23.918 00:07:23.918 NVM Specific Namespace Data 00:07:23.918 =========================== 00:07:23.918 Logical Block Storage Tag Mask: 0 00:07:23.918 Protection Information Capabilities: 00:07:23.918 16b Guard Protection Information Storage Tag Support: No 00:07:23.918 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:23.918 Storage Tag Check Read Support: No 00:07:23.918 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 Extended LBA Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:23.918 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@15 -- # for bdf in "${bdfs[@]}" 00:07:23.918 02:02:48 nvme.nvme_identify -- nvme/nvme.sh@16 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' -i 0 00:07:24.178 ===================================================== 00:07:24.178 NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:24.178 ===================================================== 00:07:24.178 Controller Capabilities/Features 00:07:24.178 ================================ 00:07:24.178 Vendor ID: 1b36 00:07:24.178 Subsystem Vendor ID: 1af4 00:07:24.178 Serial Number: 12343 00:07:24.178 Model Number: QEMU NVMe Ctrl 00:07:24.178 Firmware Version: 8.0.0 00:07:24.178 Recommended Arb Burst: 6 00:07:24.178 IEEE OUI Identifier: 00 54 52 00:07:24.178 Multi-path I/O 00:07:24.178 May have multiple subsystem ports: No 00:07:24.178 May have multiple controllers: Yes 00:07:24.178 Associated with SR-IOV VF: No 00:07:24.178 Max Data Transfer Size: 524288 00:07:24.178 Max Number of Namespaces: 256 00:07:24.178 Max Number of I/O Queues: 64 00:07:24.178 NVMe Specification Version (VS): 1.4 00:07:24.178 NVMe Specification Version (Identify): 1.4 00:07:24.178 Maximum Queue Entries: 2048 
00:07:24.178 Contiguous Queues Required: Yes 00:07:24.178 Arbitration Mechanisms Supported 00:07:24.178 Weighted Round Robin: Not Supported 00:07:24.178 Vendor Specific: Not Supported 00:07:24.178 Reset Timeout: 7500 ms 00:07:24.178 Doorbell Stride: 4 bytes 00:07:24.178 NVM Subsystem Reset: Not Supported 00:07:24.178 Command Sets Supported 00:07:24.178 NVM Command Set: Supported 00:07:24.178 Boot Partition: Not Supported 00:07:24.178 Memory Page Size Minimum: 4096 bytes 00:07:24.178 Memory Page Size Maximum: 65536 bytes 00:07:24.178 Persistent Memory Region: Not Supported 00:07:24.178 Optional Asynchronous Events Supported 00:07:24.178 Namespace Attribute Notices: Supported 00:07:24.178 Firmware Activation Notices: Not Supported 00:07:24.178 ANA Change Notices: Not Supported 00:07:24.178 PLE Aggregate Log Change Notices: Not Supported 00:07:24.178 LBA Status Info Alert Notices: Not Supported 00:07:24.178 EGE Aggregate Log Change Notices: Not Supported 00:07:24.178 Normal NVM Subsystem Shutdown event: Not Supported 00:07:24.178 Zone Descriptor Change Notices: Not Supported 00:07:24.178 Discovery Log Change Notices: Not Supported 00:07:24.178 Controller Attributes 00:07:24.178 128-bit Host Identifier: Not Supported 00:07:24.178 Non-Operational Permissive Mode: Not Supported 00:07:24.178 NVM Sets: Not Supported 00:07:24.178 Read Recovery Levels: Not Supported 00:07:24.178 Endurance Groups: Supported 00:07:24.178 Predictable Latency Mode: Not Supported 00:07:24.178 Traffic Based Keep Alive: Not Supported 00:07:24.178 Namespace Granularity: Not Supported 00:07:24.178 SQ Associations: Not Supported 00:07:24.178 UUID List: Not Supported 00:07:24.178 Multi-Domain Subsystem: Not Supported 00:07:24.178 Fixed Capacity Management: Not Supported 00:07:24.178 Variable Capacity Management: Not Supported 00:07:24.178 Delete Endurance Group: Not Supported 00:07:24.178 Delete NVM Set: Not Supported 00:07:24.178 Extended LBA Formats Supported: Supported 00:07:24.178 Flexible Data Placement Supported: Supported 00:07:24.178 00:07:24.178 Controller Memory Buffer Support 00:07:24.178 ================================ 00:07:24.178 Supported: No 00:07:24.178 00:07:24.178 Persistent Memory Region Support 00:07:24.178 ================================ 00:07:24.178 Supported: No 00:07:24.178 00:07:24.178 Admin Command Set Attributes 00:07:24.178 ============================ 00:07:24.178 Security Send/Receive: Not Supported 00:07:24.178 Format NVM: Supported 00:07:24.178 Firmware Activate/Download: Not Supported 00:07:24.178 Namespace Management: Supported 00:07:24.178 Device Self-Test: Not Supported 00:07:24.178 Directives: Supported 00:07:24.178 NVMe-MI: Not Supported 00:07:24.178 Virtualization Management: Not Supported 00:07:24.178 Doorbell Buffer Config: Supported 00:07:24.178 Get LBA Status Capability: Not Supported 00:07:24.178 Command & Feature Lockdown Capability: Not Supported 00:07:24.178 Abort Command Limit: 4 00:07:24.178 Async Event Request Limit: 4 00:07:24.178 Number of Firmware Slots: N/A 00:07:24.178 Firmware Slot 1 Read-Only: N/A 00:07:24.178 Firmware Activation Without Reset: N/A 00:07:24.178 Multiple Update Detection Support: N/A 00:07:24.178 Firmware Update Granularity: No Information Provided 00:07:24.178 Per-Namespace SMART Log: Yes 00:07:24.178 Asymmetric Namespace Access Log Page: Not Supported 00:07:24.178 Subsystem NQN: nqn.2019-08.org.qemu:fdp-subsys3 00:07:24.178 Command Effects Log Page: Supported 00:07:24.178 Get Log Page Extended Data: Supported 00:07:24.178 Telemetry Log Pages: Not 
Supported 00:07:24.178 Persistent Event Log Pages: Not Supported 00:07:24.178 Supported Log Pages Log Page: May Support 00:07:24.178 Commands Supported & Effects Log Page: Not Supported 00:07:24.178 Feature Identifiers & Effects Log Page: May Support 00:07:24.178 NVMe-MI Commands & Effects Log Page: May Support 00:07:24.178 Data Area 4 for Telemetry Log: Not Supported 00:07:24.178 Error Log Page Entries Supported: 1 00:07:24.178 Keep Alive: Not Supported 00:07:24.178 00:07:24.178 NVM Command Set Attributes 00:07:24.178 ========================== 00:07:24.178 Submission Queue Entry Size 00:07:24.178 Max: 64 00:07:24.178 Min: 64 00:07:24.178 Completion Queue Entry Size 00:07:24.178 Max: 16 00:07:24.178 Min: 16 00:07:24.178 Number of Namespaces: 256 00:07:24.178 Compare Command: Supported 00:07:24.178 Write Uncorrectable Command: Not Supported 00:07:24.178 Dataset Management Command: Supported 00:07:24.178 Write Zeroes Command: Supported 00:07:24.178 Set Features Save Field: Supported 00:07:24.178 Reservations: Not Supported 00:07:24.178 Timestamp: Supported 00:07:24.178 Copy: Supported 00:07:24.178 Volatile Write Cache: Present 00:07:24.178 Atomic Write Unit (Normal): 1 00:07:24.178 Atomic Write Unit (PFail): 1 00:07:24.178 Atomic Compare & Write Unit: 1 00:07:24.178 Fused Compare & Write: Not Supported 00:07:24.178 Scatter-Gather List 00:07:24.178 SGL Command Set: Supported 00:07:24.178 SGL Keyed: Not Supported 00:07:24.178 SGL Bit Bucket Descriptor: Not Supported 00:07:24.178 SGL Metadata Pointer: Not Supported 00:07:24.178 Oversized SGL: Not Supported 00:07:24.178 SGL Metadata Address: Not Supported 00:07:24.178 SGL Offset: Not Supported 00:07:24.178 Transport SGL Data Block: Not Supported 00:07:24.178 Replay Protected Memory Block: Not Supported 00:07:24.178 00:07:24.178 Firmware Slot Information 00:07:24.178 ========================= 00:07:24.178 Active slot: 1 00:07:24.178 Slot 1 Firmware Revision: 1.0 00:07:24.178 00:07:24.178 00:07:24.178 Commands Supported and Effects 00:07:24.178 ============================== 00:07:24.178 Admin Commands 00:07:24.178 -------------- 00:07:24.178 Delete I/O Submission Queue (00h): Supported 00:07:24.178 Create I/O Submission Queue (01h): Supported 00:07:24.178 Get Log Page (02h): Supported 00:07:24.178 Delete I/O Completion Queue (04h): Supported 00:07:24.178 Create I/O Completion Queue (05h): Supported 00:07:24.178 Identify (06h): Supported 00:07:24.178 Abort (08h): Supported 00:07:24.178 Set Features (09h): Supported 00:07:24.178 Get Features (0Ah): Supported 00:07:24.178 Asynchronous Event Request (0Ch): Supported 00:07:24.178 Namespace Attachment (15h): Supported NS-Inventory-Change 00:07:24.178 Directive Send (19h): Supported 00:07:24.178 Directive Receive (1Ah): Supported 00:07:24.178 Virtualization Management (1Ch): Supported 00:07:24.178 Doorbell Buffer Config (7Ch): Supported 00:07:24.179 Format NVM (80h): Supported LBA-Change 00:07:24.179 I/O Commands 00:07:24.179 ------------ 00:07:24.179 Flush (00h): Supported LBA-Change 00:07:24.179 Write (01h): Supported LBA-Change 00:07:24.179 Read (02h): Supported 00:07:24.179 Compare (05h): Supported 00:07:24.179 Write Zeroes (08h): Supported LBA-Change 00:07:24.179 Dataset Management (09h): Supported LBA-Change 00:07:24.179 Unknown (0Ch): Supported 00:07:24.179 Unknown (12h): Supported 00:07:24.179 Copy (19h): Supported LBA-Change 00:07:24.179 Unknown (1Dh): Supported LBA-Change 00:07:24.179 00:07:24.179 Error Log 00:07:24.179 ========= 00:07:24.179 00:07:24.179 Arbitration 00:07:24.179 =========== 
00:07:24.179 Arbitration Burst: no limit 00:07:24.179 00:07:24.179 Power Management 00:07:24.179 ================ 00:07:24.179 Number of Power States: 1 00:07:24.179 Current Power State: Power State #0 00:07:24.179 Power State #0: 00:07:24.179 Max Power: 25.00 W 00:07:24.179 Non-Operational State: Operational 00:07:24.179 Entry Latency: 16 microseconds 00:07:24.179 Exit Latency: 4 microseconds 00:07:24.179 Relative Read Throughput: 0 00:07:24.179 Relative Read Latency: 0 00:07:24.179 Relative Write Throughput: 0 00:07:24.179 Relative Write Latency: 0 00:07:24.179 Idle Power: Not Reported 00:07:24.179 Active Power: Not Reported 00:07:24.179 Non-Operational Permissive Mode: Not Supported 00:07:24.179 00:07:24.179 Health Information 00:07:24.179 ================== 00:07:24.179 Critical Warnings: 00:07:24.179 Available Spare Space: OK 00:07:24.179 Temperature: OK 00:07:24.179 Device Reliability: OK 00:07:24.179 Read Only: No 00:07:24.179 Volatile Memory Backup: OK 00:07:24.179 Current Temperature: 323 Kelvin (50 Celsius) 00:07:24.179 Temperature Threshold: 343 Kelvin (70 Celsius) 00:07:24.179 Available Spare: 0% 00:07:24.179 Available Spare Threshold: 0% 00:07:24.179 Life Percentage Used: 0% 00:07:24.179 Data Units Read: 820 00:07:24.179 Data Units Written: 749 00:07:24.179 Host Read Commands: 36066 00:07:24.179 Host Write Commands: 35489 00:07:24.179 Controller Busy Time: 0 minutes 00:07:24.179 Power Cycles: 0 00:07:24.179 Power On Hours: 0 hours 00:07:24.179 Unsafe Shutdowns: 0 00:07:24.179 Unrecoverable Media Errors: 0 00:07:24.179 Lifetime Error Log Entries: 0 00:07:24.179 Warning Temperature Time: 0 minutes 00:07:24.179 Critical Temperature Time: 0 minutes 00:07:24.179 00:07:24.179 Number of Queues 00:07:24.179 ================ 00:07:24.179 Number of I/O Submission Queues: 64 00:07:24.179 Number of I/O Completion Queues: 64 00:07:24.179 00:07:24.179 ZNS Specific Controller Data 00:07:24.179 ============================ 00:07:24.179 Zone Append Size Limit: 0 00:07:24.179 00:07:24.179 00:07:24.179 Active Namespaces 00:07:24.179 ================= 00:07:24.179 Namespace ID:1 00:07:24.179 Error Recovery Timeout: Unlimited 00:07:24.179 Command Set Identifier: NVM (00h) 00:07:24.179 Deallocate: Supported 00:07:24.179 Deallocated/Unwritten Error: Supported 00:07:24.179 Deallocated Read Value: All 0x00 00:07:24.179 Deallocate in Write Zeroes: Not Supported 00:07:24.179 Deallocated Guard Field: 0xFFFF 00:07:24.179 Flush: Supported 00:07:24.179 Reservation: Not Supported 00:07:24.179 Namespace Sharing Capabilities: Multiple Controllers 00:07:24.179 Size (in LBAs): 262144 (1GiB) 00:07:24.179 Capacity (in LBAs): 262144 (1GiB) 00:07:24.179 Utilization (in LBAs): 262144 (1GiB) 00:07:24.179 Thin Provisioning: Not Supported 00:07:24.179 Per-NS Atomic Units: No 00:07:24.179 Maximum Single Source Range Length: 128 00:07:24.179 Maximum Copy Length: 128 00:07:24.179 Maximum Source Range Count: 128 00:07:24.179 NGUID/EUI64 Never Reused: No 00:07:24.179 Namespace Write Protected: No 00:07:24.179 Endurance group ID: 1 00:07:24.179 Number of LBA Formats: 8 00:07:24.179 Current LBA Format: LBA Format #04 00:07:24.179 LBA Format #00: Data Size: 512 Metadata Size: 0 00:07:24.179 LBA Format #01: Data Size: 512 Metadata Size: 8 00:07:24.179 LBA Format #02: Data Size: 512 Metadata Size: 16 00:07:24.179 LBA Format #03: Data Size: 512 Metadata Size: 64 00:07:24.179 LBA Format #04: Data Size: 4096 Metadata Size: 0 00:07:24.179 LBA Format #05: Data Size: 4096 Metadata Size: 8 00:07:24.179 LBA Format #06: Data Size: 4096 
Metadata Size: 16 00:07:24.179 LBA Format #07: Data Size: 4096 Metadata Size: 64 00:07:24.179 00:07:24.179 Get Feature FDP: 00:07:24.179 ================ 00:07:24.179 Enabled: Yes 00:07:24.179 FDP configuration index: 0 00:07:24.179 00:07:24.179 FDP configurations log page 00:07:24.179 =========================== 00:07:24.179 Number of FDP configurations: 1 00:07:24.179 Version: 0 00:07:24.179 Size: 112 00:07:24.179 FDP Configuration Descriptor: 0 00:07:24.179 Descriptor Size: 96 00:07:24.179 Reclaim Group Identifier format: 2 00:07:24.179 FDP Volatile Write Cache: Not Present 00:07:24.179 FDP Configuration: Valid 00:07:24.179 Vendor Specific Size: 0 00:07:24.179 Number of Reclaim Groups: 2 00:07:24.179 Number of Reclaim Unit Handles: 8 00:07:24.179 Max Placement Identifiers: 128 00:07:24.179 Number of Namespaces Supported: 256 00:07:24.179 Reclaim unit Nominal Size: 6000000 bytes 00:07:24.179 Estimated Reclaim Unit Time Limit: Not Reported 00:07:24.179 RUH Desc #000: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #001: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #002: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #003: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #004: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #005: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #006: RUH Type: Initially Isolated 00:07:24.179 RUH Desc #007: RUH Type: Initially Isolated 00:07:24.179 00:07:24.179 FDP reclaim unit handle usage log page 00:07:24.179 ====================================== 00:07:24.179 Number of Reclaim Unit Handles: 8 00:07:24.179 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:07:24.179 RUH Usage Desc #001: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #002: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #003: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #004: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #005: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #006: RUH Attributes: Unused 00:07:24.179 RUH Usage Desc #007: RUH Attributes: Unused 00:07:24.179 00:07:24.179 FDP statistics log page 00:07:24.179 ======================= 00:07:24.179 Host bytes with metadata written: 484089856 00:07:24.179 Media bytes with metadata written: 484143104 00:07:24.179 Media bytes erased: 0 00:07:24.179 00:07:24.179 FDP events log page 00:07:24.179 =================== 00:07:24.179 Number of FDP events: 0 00:07:24.179 00:07:24.179 NVM Specific Namespace Data 00:07:24.179 =========================== 00:07:24.179 Logical Block Storage Tag Mask: 0 00:07:24.179 Protection Information Capabilities: 00:07:24.179 16b Guard Protection Information Storage Tag Support: No 00:07:24.179 16b Guard Protection Information Storage Tag Mask: Any bit in LBSTM can be 0 00:07:24.179 Storage Tag Check Read Support: No 00:07:24.179 Extended LBA Format #00: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #01: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #02: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #03: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #04: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #05: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA Format #06: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 Extended LBA 
Format #07: Storage Tag Size: 0 , Protection Information Format: 16b Guard PI 00:07:24.179 00:07:24.179 real 0m1.210s 00:07:24.179 user 0m0.444s 00:07:24.179 sys 0m0.523s 00:07:24.179 02:02:48 nvme.nvme_identify -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:24.179 02:02:48 nvme.nvme_identify -- common/autotest_common.sh@10 -- # set +x 00:07:24.179 ************************************ 00:07:24.179 END TEST nvme_identify 00:07:24.179 ************************************ 00:07:24.179 02:02:48 nvme -- nvme/nvme.sh@86 -- # run_test nvme_perf nvme_perf 00:07:24.179 02:02:48 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:24.179 02:02:48 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:24.179 02:02:48 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:24.179 ************************************ 00:07:24.179 START TEST nvme_perf 00:07:24.179 ************************************ 00:07:24.179 02:02:48 nvme.nvme_perf -- common/autotest_common.sh@1129 -- # nvme_perf 00:07:24.179 02:02:48 nvme.nvme_perf -- nvme/nvme.sh@22 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w read -o 12288 -t 1 -LL -i 0 -N 00:07:25.558 Initializing NVMe Controllers 00:07:25.558 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:07:25.558 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:07:25.558 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:07:25.558 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:07:25.558 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:07:25.558 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:07:25.558 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:07:25.558 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:07:25.558 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:07:25.558 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:07:25.558 Initialization complete. Launching workers. 
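To re-drive the identify and perf steps above by hand outside the test harness, a minimal sketch follows. It assumes the in-tree SPDK build path shown in the logged commands; the BDF and all flags are copied from the trace, except that the target controller is pinned with -r (so only one device is exercised) and the original invocation's -N flag is dropped.

#!/usr/bin/env bash
set -euo pipefail
# Build-output path taken from the logged commands; adjust to your checkout.
SPDK_BIN=/home/vagrant/spdk_repo/spdk/build/bin
bdf=0000:00:13.0   # one of the emulated [1b36:0010] controllers attached above
# Dump controller and namespace data, same flags as the nvme_identify step.
"$SPDK_BIN/spdk_nvme_identify" -r "trtype:PCIe traddr:$bdf" -i 0
# Queue depth 128, 12 KiB (12288-byte) sequential reads for 1 second, with
# detailed latency histograms (-LL), matching the nvme_perf run recorded here.
"$SPDK_BIN/spdk_nvme_perf" -q 128 -w read -o 12288 -t 1 -LL -i 0 \
    -r "trtype:PCIe traddr:$bdf"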
00:07:25.558 ======================================================== 00:07:25.558 Latency(us) 00:07:25.558 Device Information : IOPS MiB/s Average min max 00:07:25.558 PCIE (0000:00:10.0) NSID 1 from core 0: 19129.11 224.17 6699.50 5418.66 33879.36 00:07:25.558 PCIE (0000:00:11.0) NSID 1 from core 0: 19129.11 224.17 6690.51 5499.67 32983.35 00:07:25.558 PCIE (0000:00:13.0) NSID 1 from core 0: 19129.11 224.17 6680.40 5526.85 31997.03 00:07:25.558 PCIE (0000:00:12.0) NSID 1 from core 0: 19129.11 224.17 6670.17 5508.87 30969.70 00:07:25.558 PCIE (0000:00:12.0) NSID 2 from core 0: 19129.11 224.17 6659.65 5527.48 29754.71 00:07:25.558 PCIE (0000:00:12.0) NSID 3 from core 0: 19193.09 224.92 6628.12 5503.39 21200.30 00:07:25.558 ======================================================== 00:07:25.558 Total : 114838.66 1345.77 6671.37 5418.66 33879.36 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5570.560us 00:07:25.558 10.00000% : 5772.209us 00:07:25.558 25.00000% : 5948.652us 00:07:25.558 50.00000% : 6276.332us 00:07:25.558 75.00000% : 6604.012us 00:07:25.558 90.00000% : 7662.671us 00:07:25.558 95.00000% : 9074.215us 00:07:25.558 98.00000% : 11846.892us 00:07:25.558 99.00000% : 14115.446us 00:07:25.558 99.50000% : 27827.594us 00:07:25.558 99.90000% : 33272.123us 00:07:25.558 99.99000% : 33877.071us 00:07:25.558 99.99900% : 34078.720us 00:07:25.558 99.99990% : 34078.720us 00:07:25.558 99.99999% : 34078.720us 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5620.972us 00:07:25.558 10.00000% : 5822.622us 00:07:25.558 25.00000% : 5999.065us 00:07:25.558 50.00000% : 6251.126us 00:07:25.558 75.00000% : 6553.600us 00:07:25.558 90.00000% : 7662.671us 00:07:25.558 95.00000% : 9074.215us 00:07:25.558 98.00000% : 11695.655us 00:07:25.558 99.00000% : 14821.218us 00:07:25.558 99.50000% : 26214.400us 00:07:25.558 99.90000% : 32465.526us 00:07:25.558 99.99000% : 33070.474us 00:07:25.558 99.99900% : 33070.474us 00:07:25.558 99.99990% : 33070.474us 00:07:25.558 99.99999% : 33070.474us 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5620.972us 00:07:25.558 10.00000% : 5822.622us 00:07:25.558 25.00000% : 5999.065us 00:07:25.558 50.00000% : 6251.126us 00:07:25.558 75.00000% : 6553.600us 00:07:25.558 90.00000% : 7713.083us 00:07:25.558 95.00000% : 9124.628us 00:07:25.558 98.00000% : 11494.006us 00:07:25.558 99.00000% : 14014.622us 00:07:25.558 99.50000% : 24802.855us 00:07:25.558 99.90000% : 31457.280us 00:07:25.558 99.99000% : 32062.228us 00:07:25.558 99.99900% : 32062.228us 00:07:25.558 99.99990% : 32062.228us 00:07:25.558 99.99999% : 32062.228us 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5620.972us 00:07:25.558 10.00000% : 5822.622us 00:07:25.558 25.00000% : 5999.065us 00:07:25.558 50.00000% : 6251.126us 00:07:25.558 75.00000% : 6553.600us 00:07:25.558 90.00000% : 7612.258us 00:07:25.558 95.00000% : 9124.628us 00:07:25.558 98.00000% : 11292.357us 00:07:25.558 99.00000% : 
14115.446us 00:07:25.558 99.50000% : 22988.012us 00:07:25.558 99.90000% : 30449.034us 00:07:25.558 99.99000% : 31053.982us 00:07:25.558 99.99900% : 31053.982us 00:07:25.558 99.99990% : 31053.982us 00:07:25.558 99.99999% : 31053.982us 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5646.178us 00:07:25.558 10.00000% : 5822.622us 00:07:25.558 25.00000% : 5999.065us 00:07:25.558 50.00000% : 6251.126us 00:07:25.558 75.00000% : 6553.600us 00:07:25.558 90.00000% : 7561.846us 00:07:25.558 95.00000% : 9175.040us 00:07:25.558 98.00000% : 11393.182us 00:07:25.558 99.00000% : 14115.446us 00:07:25.558 99.50000% : 21173.169us 00:07:25.558 99.90000% : 29440.788us 00:07:25.558 99.99000% : 29844.086us 00:07:25.558 99.99900% : 29844.086us 00:07:25.558 99.99990% : 29844.086us 00:07:25.558 99.99999% : 29844.086us 00:07:25.558 00:07:25.558 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:25.558 ================================================================================= 00:07:25.558 1.00000% : 5620.972us 00:07:25.558 10.00000% : 5822.622us 00:07:25.558 25.00000% : 5999.065us 00:07:25.558 50.00000% : 6251.126us 00:07:25.558 75.00000% : 6553.600us 00:07:25.558 90.00000% : 7662.671us 00:07:25.558 95.00000% : 9225.452us 00:07:25.558 98.00000% : 11645.243us 00:07:25.558 99.00000% : 14014.622us 00:07:25.558 99.50000% : 16131.938us 00:07:25.558 99.90000% : 20769.871us 00:07:25.558 99.99000% : 21273.994us 00:07:25.558 99.99900% : 21273.994us 00:07:25.558 99.99990% : 21273.994us 00:07:25.558 99.99999% : 21273.994us 00:07:25.558 00:07:25.558 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0: 00:07:25.558 ============================================================================== 00:07:25.558 Range in us Cumulative IO count 00:07:25.558 5394.117 - 5419.323: 0.0052% ( 1) 00:07:25.558 5419.323 - 5444.529: 0.0209% ( 3) 00:07:25.558 5444.529 - 5469.735: 0.0470% ( 5) 00:07:25.558 5469.735 - 5494.942: 0.1934% ( 28) 00:07:25.558 5494.942 - 5520.148: 0.4964% ( 58) 00:07:25.558 5520.148 - 5545.354: 0.9772% ( 92) 00:07:25.558 5545.354 - 5570.560: 1.6409% ( 127) 00:07:25.558 5570.560 - 5595.766: 2.4143% ( 148) 00:07:25.558 5595.766 - 5620.972: 3.1982% ( 150) 00:07:25.558 5620.972 - 5646.178: 4.1388% ( 180) 00:07:25.558 5646.178 - 5671.385: 5.2832% ( 219) 00:07:25.558 5671.385 - 5696.591: 6.8092% ( 292) 00:07:25.558 5696.591 - 5721.797: 8.2358% ( 273) 00:07:25.558 5721.797 - 5747.003: 9.9133% ( 321) 00:07:25.558 5747.003 - 5772.209: 11.6639% ( 335) 00:07:25.558 5772.209 - 5797.415: 13.4720% ( 346) 00:07:25.558 5797.415 - 5822.622: 15.4421% ( 377) 00:07:25.558 5822.622 - 5847.828: 17.2502% ( 346) 00:07:25.558 5847.828 - 5873.034: 19.1785% ( 369) 00:07:25.558 5873.034 - 5898.240: 21.2270% ( 392) 00:07:25.558 5898.240 - 5923.446: 22.9933% ( 338) 00:07:25.558 5923.446 - 5948.652: 25.0209% ( 388) 00:07:25.558 5948.652 - 5973.858: 26.9806% ( 375) 00:07:25.558 5973.858 - 5999.065: 29.0291% ( 392) 00:07:25.558 5999.065 - 6024.271: 30.9992% ( 377) 00:07:25.558 6024.271 - 6049.477: 32.9327% ( 370) 00:07:25.558 6049.477 - 6074.683: 34.9812% ( 392) 00:07:25.558 6074.683 - 6099.889: 36.9408% ( 375) 00:07:25.558 6099.889 - 6125.095: 38.9423% ( 383) 00:07:25.558 6125.095 - 6150.302: 41.0378% ( 401) 00:07:25.558 6150.302 - 6175.508: 43.0132% ( 378) 00:07:25.558 6175.508 - 6200.714: 45.0773% ( 395) 00:07:25.558 6200.714 - 6225.920: 47.0579% ( 379) 
00:07:25.558 6225.920 - 6251.126: 49.2370% ( 417) 00:07:25.558 6251.126 - 6276.332: 51.1967% ( 375) 00:07:25.558 6276.332 - 6301.538: 53.2922% ( 401) 00:07:25.558 6301.538 - 6326.745: 55.3982% ( 403) 00:07:25.558 6326.745 - 6351.951: 57.4519% ( 393) 00:07:25.558 6351.951 - 6377.157: 59.5318% ( 398) 00:07:25.558 6377.157 - 6402.363: 61.6691% ( 409) 00:07:25.558 6402.363 - 6427.569: 63.7908% ( 406) 00:07:25.558 6427.569 - 6452.775: 66.0013% ( 423) 00:07:25.558 6452.775 - 6503.188: 69.7952% ( 726) 00:07:25.558 6503.188 - 6553.600: 73.2023% ( 652) 00:07:25.558 6553.600 - 6604.012: 76.2281% ( 579) 00:07:25.558 6604.012 - 6654.425: 79.0134% ( 533) 00:07:25.559 6654.425 - 6704.837: 81.0723% ( 394) 00:07:25.559 6704.837 - 6755.249: 82.5146% ( 276) 00:07:25.559 6755.249 - 6805.662: 83.4239% ( 174) 00:07:25.559 6805.662 - 6856.074: 84.0196% ( 114) 00:07:25.559 6856.074 - 6906.486: 84.5684% ( 105) 00:07:25.559 6906.486 - 6956.898: 85.1223% ( 106) 00:07:25.559 6956.898 - 7007.311: 85.6396% ( 99) 00:07:25.559 7007.311 - 7057.723: 86.1570% ( 99) 00:07:25.559 7057.723 - 7108.135: 86.5750% ( 80) 00:07:25.559 7108.135 - 7158.548: 86.9983% ( 81) 00:07:25.559 7158.548 - 7208.960: 87.3694% ( 71) 00:07:25.559 7208.960 - 7259.372: 87.7717% ( 77) 00:07:25.559 7259.372 - 7309.785: 88.1219% ( 67) 00:07:25.559 7309.785 - 7360.197: 88.4511% ( 63) 00:07:25.559 7360.197 - 7410.609: 88.7803% ( 63) 00:07:25.559 7410.609 - 7461.022: 89.0782% ( 57) 00:07:25.559 7461.022 - 7511.434: 89.3969% ( 61) 00:07:25.559 7511.434 - 7561.846: 89.6844% ( 55) 00:07:25.559 7561.846 - 7612.258: 89.9927% ( 59) 00:07:25.559 7612.258 - 7662.671: 90.2435% ( 48) 00:07:25.559 7662.671 - 7713.083: 90.4996% ( 49) 00:07:25.559 7713.083 - 7763.495: 90.7661% ( 51) 00:07:25.559 7763.495 - 7813.908: 91.0796% ( 60) 00:07:25.559 7813.908 - 7864.320: 91.3357% ( 49) 00:07:25.559 7864.320 - 7914.732: 91.6022% ( 51) 00:07:25.559 7914.732 - 7965.145: 91.8060% ( 39) 00:07:25.559 7965.145 - 8015.557: 91.9785% ( 33) 00:07:25.559 8015.557 - 8065.969: 92.1405% ( 31) 00:07:25.559 8065.969 - 8116.382: 92.2868% ( 28) 00:07:25.559 8116.382 - 8166.794: 92.4488% ( 31) 00:07:25.559 8166.794 - 8217.206: 92.5951% ( 28) 00:07:25.559 8217.206 - 8267.618: 92.7467% ( 29) 00:07:25.559 8267.618 - 8318.031: 92.8721% ( 24) 00:07:25.559 8318.031 - 8368.443: 93.0132% ( 27) 00:07:25.559 8368.443 - 8418.855: 93.1961% ( 35) 00:07:25.559 8418.855 - 8469.268: 93.3319% ( 26) 00:07:25.559 8469.268 - 8519.680: 93.4521% ( 23) 00:07:25.559 8519.680 - 8570.092: 93.5776% ( 24) 00:07:25.559 8570.092 - 8620.505: 93.7343% ( 30) 00:07:25.559 8620.505 - 8670.917: 93.9015% ( 32) 00:07:25.559 8670.917 - 8721.329: 94.0844% ( 35) 00:07:25.559 8721.329 - 8771.742: 94.2569% ( 33) 00:07:25.559 8771.742 - 8822.154: 94.3823% ( 24) 00:07:25.559 8822.154 - 8872.566: 94.5025% ( 23) 00:07:25.559 8872.566 - 8922.978: 94.6332% ( 25) 00:07:25.559 8922.978 - 8973.391: 94.7481% ( 22) 00:07:25.559 8973.391 - 9023.803: 94.8892% ( 27) 00:07:25.559 9023.803 - 9074.215: 95.0146% ( 24) 00:07:25.559 9074.215 - 9124.628: 95.1453% ( 25) 00:07:25.559 9124.628 - 9175.040: 95.2446% ( 19) 00:07:25.559 9175.040 - 9225.452: 95.3700% ( 24) 00:07:25.559 9225.452 - 9275.865: 95.4954% ( 24) 00:07:25.559 9275.865 - 9326.277: 95.6051% ( 21) 00:07:25.559 9326.277 - 9376.689: 95.6888% ( 16) 00:07:25.559 9376.689 - 9427.102: 95.7880% ( 19) 00:07:25.559 9427.102 - 9477.514: 95.8717% ( 16) 00:07:25.559 9477.514 - 9527.926: 95.9762% ( 20) 00:07:25.559 9527.926 - 9578.338: 96.0807% ( 20) 00:07:25.559 9578.338 - 9628.751: 96.1800% ( 19) 
00:07:25.559 9628.751 - 9679.163: 96.2427% ( 12) 00:07:25.559 9679.163 - 9729.575: 96.3002% ( 11) 00:07:25.559 9729.575 - 9779.988: 96.3524% ( 10) 00:07:25.559 9779.988 - 9830.400: 96.3890% ( 7) 00:07:25.559 9830.400 - 9880.812: 96.4256% ( 7) 00:07:25.559 9880.812 - 9931.225: 96.4674% ( 8) 00:07:25.559 9931.225 - 9981.637: 96.5040% ( 7) 00:07:25.559 9981.637 - 10032.049: 96.5510% ( 9) 00:07:25.559 10032.049 - 10082.462: 96.5876% ( 7) 00:07:25.559 10082.462 - 10132.874: 96.6294% ( 8) 00:07:25.559 10132.874 - 10183.286: 96.6607% ( 6) 00:07:25.559 10183.286 - 10233.698: 96.7130% ( 10) 00:07:25.559 10233.698 - 10284.111: 96.7444% ( 6) 00:07:25.559 10284.111 - 10334.523: 96.7809% ( 7) 00:07:25.559 10334.523 - 10384.935: 96.8227% ( 8) 00:07:25.559 10384.935 - 10435.348: 96.8593% ( 7) 00:07:25.559 10435.348 - 10485.760: 96.9116% ( 10) 00:07:25.559 10485.760 - 10536.172: 96.9482% ( 7) 00:07:25.559 10536.172 - 10586.585: 96.9900% ( 8) 00:07:25.559 10586.585 - 10636.997: 97.0318% ( 8) 00:07:25.559 10636.997 - 10687.409: 97.0474% ( 3) 00:07:25.559 10687.409 - 10737.822: 97.0684% ( 4) 00:07:25.559 10737.822 - 10788.234: 97.0788% ( 2) 00:07:25.559 10788.234 - 10838.646: 97.0945% ( 3) 00:07:25.559 10838.646 - 10889.058: 97.1102% ( 3) 00:07:25.559 10889.058 - 10939.471: 97.1467% ( 7) 00:07:25.559 10939.471 - 10989.883: 97.1885% ( 8) 00:07:25.559 10989.883 - 11040.295: 97.2513% ( 12) 00:07:25.559 11040.295 - 11090.708: 97.2931% ( 8) 00:07:25.559 11090.708 - 11141.120: 97.3349% ( 8) 00:07:25.559 11141.120 - 11191.532: 97.3819% ( 9) 00:07:25.559 11191.532 - 11241.945: 97.4342% ( 10) 00:07:25.559 11241.945 - 11292.357: 97.4864% ( 10) 00:07:25.559 11292.357 - 11342.769: 97.5334% ( 9) 00:07:25.559 11342.769 - 11393.182: 97.5857% ( 10) 00:07:25.559 11393.182 - 11443.594: 97.6432% ( 11) 00:07:25.559 11443.594 - 11494.006: 97.6902% ( 9) 00:07:25.559 11494.006 - 11544.418: 97.7477% ( 11) 00:07:25.559 11544.418 - 11594.831: 97.8104% ( 12) 00:07:25.559 11594.831 - 11645.243: 97.8522% ( 8) 00:07:25.559 11645.243 - 11695.655: 97.8836% ( 6) 00:07:25.559 11695.655 - 11746.068: 97.9306% ( 9) 00:07:25.559 11746.068 - 11796.480: 97.9672% ( 7) 00:07:25.559 11796.480 - 11846.892: 98.0299% ( 12) 00:07:25.559 11846.892 - 11897.305: 98.0821% ( 10) 00:07:25.559 11897.305 - 11947.717: 98.1344% ( 10) 00:07:25.559 11947.717 - 11998.129: 98.1762% ( 8) 00:07:25.559 11998.129 - 12048.542: 98.2128% ( 7) 00:07:25.559 12048.542 - 12098.954: 98.2441% ( 6) 00:07:25.559 12098.954 - 12149.366: 98.2807% ( 7) 00:07:25.559 12149.366 - 12199.778: 98.3173% ( 7) 00:07:25.559 12199.778 - 12250.191: 98.3539% ( 7) 00:07:25.559 12250.191 - 12300.603: 98.3905% ( 7) 00:07:25.559 12300.603 - 12351.015: 98.4166% ( 5) 00:07:25.559 12351.015 - 12401.428: 98.4532% ( 7) 00:07:25.559 12401.428 - 12451.840: 98.4950% ( 8) 00:07:25.559 12451.840 - 12502.252: 98.5263% ( 6) 00:07:25.559 12502.252 - 12552.665: 98.5681% ( 8) 00:07:25.559 12552.665 - 12603.077: 98.5943% ( 5) 00:07:25.559 12603.077 - 12653.489: 98.6413% ( 9) 00:07:25.559 12653.489 - 12703.902: 98.6727% ( 6) 00:07:25.559 12703.902 - 12754.314: 98.7040% ( 6) 00:07:25.559 12754.314 - 12804.726: 98.7458% ( 8) 00:07:25.559 12804.726 - 12855.138: 98.7615% ( 3) 00:07:25.559 12855.138 - 12905.551: 98.7929% ( 6) 00:07:25.559 12905.551 - 13006.375: 98.8190% ( 5) 00:07:25.559 13006.375 - 13107.200: 98.8556% ( 7) 00:07:25.559 13107.200 - 13208.025: 98.8921% ( 7) 00:07:25.559 13208.025 - 13308.849: 98.9339% ( 8) 00:07:25.559 13308.849 - 13409.674: 98.9496% ( 3) 00:07:25.559 13409.674 - 13510.498: 98.9653% ( 3) 
00:07:25.559 13510.498 - 13611.323: 98.9862% ( 4) 00:07:25.559 13611.323 - 13712.148: 98.9967% ( 2) 00:07:25.559 14014.622 - 14115.446: 99.0123% ( 3) 00:07:25.559 14115.446 - 14216.271: 99.0332% ( 4) 00:07:25.559 14216.271 - 14317.095: 99.0489% ( 3) 00:07:25.559 14317.095 - 14417.920: 99.0646% ( 3) 00:07:25.559 14417.920 - 14518.745: 99.0803% ( 3) 00:07:25.559 14518.745 - 14619.569: 99.1012% ( 4) 00:07:25.559 14619.569 - 14720.394: 99.1221% ( 4) 00:07:25.559 14720.394 - 14821.218: 99.1378% ( 3) 00:07:25.559 14821.218 - 14922.043: 99.1534% ( 3) 00:07:25.559 14922.043 - 15022.868: 99.1743% ( 4) 00:07:25.559 15022.868 - 15123.692: 99.1900% ( 3) 00:07:25.559 15123.692 - 15224.517: 99.2109% ( 4) 00:07:25.559 15224.517 - 15325.342: 99.2266% ( 3) 00:07:25.559 15325.342 - 15426.166: 99.2423% ( 3) 00:07:25.559 15426.166 - 15526.991: 99.2632% ( 4) 00:07:25.559 15526.991 - 15627.815: 99.2788% ( 3) 00:07:25.559 15627.815 - 15728.640: 99.2997% ( 4) 00:07:25.559 15728.640 - 15829.465: 99.3154% ( 3) 00:07:25.559 15829.465 - 15930.289: 99.3311% ( 3) 00:07:25.559 26819.348 - 27020.997: 99.3572% ( 5) 00:07:25.559 27020.997 - 27222.646: 99.3990% ( 8) 00:07:25.559 27222.646 - 27424.295: 99.4408% ( 8) 00:07:25.559 27424.295 - 27625.945: 99.4774% ( 7) 00:07:25.559 27625.945 - 27827.594: 99.5245% ( 9) 00:07:25.559 27827.594 - 28029.243: 99.5663% ( 8) 00:07:25.559 28029.243 - 28230.892: 99.6081% ( 8) 00:07:25.559 28230.892 - 28432.542: 99.6499% ( 8) 00:07:25.559 28432.542 - 28634.191: 99.6656% ( 3) 00:07:25.559 31255.631 - 31457.280: 99.6708% ( 1) 00:07:25.559 31457.280 - 31658.929: 99.7021% ( 6) 00:07:25.559 31658.929 - 31860.578: 99.7283% ( 5) 00:07:25.559 31860.578 - 32062.228: 99.7492% ( 4) 00:07:25.559 32062.228 - 32263.877: 99.7805% ( 6) 00:07:25.559 32263.877 - 32465.526: 99.8066% ( 5) 00:07:25.559 32465.526 - 32667.175: 99.8328% ( 5) 00:07:25.559 32667.175 - 32868.825: 99.8589% ( 5) 00:07:25.559 32868.825 - 33070.474: 99.8850% ( 5) 00:07:25.559 33070.474 - 33272.123: 99.9112% ( 5) 00:07:25.559 33272.123 - 33473.772: 99.9425% ( 6) 00:07:25.559 33473.772 - 33675.422: 99.9686% ( 5) 00:07:25.559 33675.422 - 33877.071: 99.9948% ( 5) 00:07:25.559 33877.071 - 34078.720: 100.0000% ( 1) 00:07:25.559 00:07:25.559 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0: 00:07:25.559 ============================================================================== 00:07:25.559 Range in us Cumulative IO count 00:07:25.559 5494.942 - 5520.148: 0.0366% ( 7) 00:07:25.559 5520.148 - 5545.354: 0.0836% ( 9) 00:07:25.559 5545.354 - 5570.560: 0.2561% ( 33) 00:07:25.559 5570.560 - 5595.766: 0.6898% ( 83) 00:07:25.559 5595.766 - 5620.972: 1.3117% ( 119) 00:07:25.559 5620.972 - 5646.178: 1.9649% ( 125) 00:07:25.559 5646.178 - 5671.385: 2.6965% ( 140) 00:07:25.559 5671.385 - 5696.591: 3.5901% ( 171) 00:07:25.559 5696.591 - 5721.797: 4.5725% ( 188) 00:07:25.559 5721.797 - 5747.003: 5.9835% ( 270) 00:07:25.559 5747.003 - 5772.209: 7.5146% ( 293) 00:07:25.559 5772.209 - 5797.415: 9.2339% ( 329) 00:07:25.559 5797.415 - 5822.622: 11.0472% ( 347) 00:07:25.559 5822.622 - 5847.828: 13.1428% ( 401) 00:07:25.559 5847.828 - 5873.034: 15.3324% ( 419) 00:07:25.559 5873.034 - 5898.240: 17.6317% ( 440) 00:07:25.559 5898.240 - 5923.446: 19.9728% ( 448) 00:07:25.560 5923.446 - 5948.652: 22.2147% ( 429) 00:07:25.560 5948.652 - 5973.858: 24.5506% ( 447) 00:07:25.560 5973.858 - 5999.065: 26.9858% ( 466) 00:07:25.560 5999.065 - 6024.271: 29.4576% ( 473) 00:07:25.560 6024.271 - 6049.477: 31.8301% ( 454) 00:07:25.560 6049.477 - 6074.683: 34.1816% ( 
450) 00:07:25.560 6074.683 - 6099.889: 36.5750% ( 458) 00:07:25.560 6099.889 - 6125.095: 38.9841% ( 461) 00:07:25.560 6125.095 - 6150.302: 41.3723% ( 457) 00:07:25.560 6150.302 - 6175.508: 43.7186% ( 449) 00:07:25.560 6175.508 - 6200.714: 46.1643% ( 468) 00:07:25.560 6200.714 - 6225.920: 48.6099% ( 468) 00:07:25.560 6225.920 - 6251.126: 50.9824% ( 454) 00:07:25.560 6251.126 - 6276.332: 53.3967% ( 462) 00:07:25.560 6276.332 - 6301.538: 55.7901% ( 458) 00:07:25.560 6301.538 - 6326.745: 58.1783% ( 457) 00:07:25.560 6326.745 - 6351.951: 60.5717% ( 458) 00:07:25.560 6351.951 - 6377.157: 62.9651% ( 458) 00:07:25.560 6377.157 - 6402.363: 65.2696% ( 441) 00:07:25.560 6402.363 - 6427.569: 67.4331% ( 414) 00:07:25.560 6427.569 - 6452.775: 69.4764% ( 391) 00:07:25.560 6452.775 - 6503.188: 73.1031% ( 694) 00:07:25.560 6503.188 - 6553.600: 76.4841% ( 647) 00:07:25.560 6553.600 - 6604.012: 79.2485% ( 529) 00:07:25.560 6604.012 - 6654.425: 81.0671% ( 348) 00:07:25.560 6654.425 - 6704.837: 82.3108% ( 238) 00:07:25.560 6704.837 - 6755.249: 83.0790% ( 147) 00:07:25.560 6755.249 - 6805.662: 83.6904% ( 117) 00:07:25.560 6805.662 - 6856.074: 84.2444% ( 106) 00:07:25.560 6856.074 - 6906.486: 84.7826% ( 103) 00:07:25.560 6906.486 - 6956.898: 85.2372% ( 87) 00:07:25.560 6956.898 - 7007.311: 85.6396% ( 77) 00:07:25.560 7007.311 - 7057.723: 86.0263% ( 74) 00:07:25.560 7057.723 - 7108.135: 86.4496% ( 81) 00:07:25.560 7108.135 - 7158.548: 86.7841% ( 64) 00:07:25.560 7158.548 - 7208.960: 87.0976% ( 60) 00:07:25.560 7208.960 - 7259.372: 87.4425% ( 66) 00:07:25.560 7259.372 - 7309.785: 87.8397% ( 76) 00:07:25.560 7309.785 - 7360.197: 88.2055% ( 70) 00:07:25.560 7360.197 - 7410.609: 88.5661% ( 69) 00:07:25.560 7410.609 - 7461.022: 88.8953% ( 63) 00:07:25.560 7461.022 - 7511.434: 89.2140% ( 61) 00:07:25.560 7511.434 - 7561.846: 89.4962% ( 54) 00:07:25.560 7561.846 - 7612.258: 89.8464% ( 67) 00:07:25.560 7612.258 - 7662.671: 90.1756% ( 63) 00:07:25.560 7662.671 - 7713.083: 90.4891% ( 60) 00:07:25.560 7713.083 - 7763.495: 90.7713% ( 54) 00:07:25.560 7763.495 - 7813.908: 91.0692% ( 57) 00:07:25.560 7813.908 - 7864.320: 91.3514% ( 54) 00:07:25.560 7864.320 - 7914.732: 91.6336% ( 54) 00:07:25.560 7914.732 - 7965.145: 91.8949% ( 50) 00:07:25.560 7965.145 - 8015.557: 92.1561% ( 50) 00:07:25.560 8015.557 - 8065.969: 92.4279% ( 52) 00:07:25.560 8065.969 - 8116.382: 92.6735% ( 47) 00:07:25.560 8116.382 - 8166.794: 92.8773% ( 39) 00:07:25.560 8166.794 - 8217.206: 93.0497% ( 33) 00:07:25.560 8217.206 - 8267.618: 93.1908% ( 27) 00:07:25.560 8267.618 - 8318.031: 93.3372% ( 28) 00:07:25.560 8318.031 - 8368.443: 93.4887% ( 29) 00:07:25.560 8368.443 - 8418.855: 93.6089% ( 23) 00:07:25.560 8418.855 - 8469.268: 93.7239% ( 22) 00:07:25.560 8469.268 - 8519.680: 93.8232% ( 19) 00:07:25.560 8519.680 - 8570.092: 93.9329% ( 21) 00:07:25.560 8570.092 - 8620.505: 94.0374% ( 20) 00:07:25.560 8620.505 - 8670.917: 94.1576% ( 23) 00:07:25.560 8670.917 - 8721.329: 94.2621% ( 20) 00:07:25.560 8721.329 - 8771.742: 94.3562% ( 18) 00:07:25.560 8771.742 - 8822.154: 94.4555% ( 19) 00:07:25.560 8822.154 - 8872.566: 94.5652% ( 21) 00:07:25.560 8872.566 - 8922.978: 94.6750% ( 21) 00:07:25.560 8922.978 - 8973.391: 94.7847% ( 21) 00:07:25.560 8973.391 - 9023.803: 94.8840% ( 19) 00:07:25.560 9023.803 - 9074.215: 95.0355% ( 29) 00:07:25.560 9074.215 - 9124.628: 95.1714% ( 26) 00:07:25.560 9124.628 - 9175.040: 95.2707% ( 19) 00:07:25.560 9175.040 - 9225.452: 95.3752% ( 20) 00:07:25.560 9225.452 - 9275.865: 95.4745% ( 19) 00:07:25.560 9275.865 - 9326.277: 95.5686% ( 
18) 00:07:25.560 9326.277 - 9376.689: 95.6731% ( 20) 00:07:25.560 9376.689 - 9427.102: 95.7828% ( 21) 00:07:25.560 9427.102 - 9477.514: 95.8821% ( 19) 00:07:25.560 9477.514 - 9527.926: 95.9657% ( 16) 00:07:25.560 9527.926 - 9578.338: 96.0546% ( 17) 00:07:25.560 9578.338 - 9628.751: 96.1382% ( 16) 00:07:25.560 9628.751 - 9679.163: 96.2009% ( 12) 00:07:25.560 9679.163 - 9729.575: 96.2584% ( 11) 00:07:25.560 9729.575 - 9779.988: 96.3263% ( 13) 00:07:25.560 9779.988 - 9830.400: 96.3629% ( 7) 00:07:25.560 9830.400 - 9880.812: 96.3786% ( 3) 00:07:25.560 9880.812 - 9931.225: 96.3995% ( 4) 00:07:25.560 9931.225 - 9981.637: 96.4308% ( 6) 00:07:25.560 9981.637 - 10032.049: 96.4935% ( 12) 00:07:25.560 10032.049 - 10082.462: 96.5301% ( 7) 00:07:25.560 10082.462 - 10132.874: 96.5667% ( 7) 00:07:25.560 10132.874 - 10183.286: 96.5980% ( 6) 00:07:25.560 10183.286 - 10233.698: 96.6451% ( 9) 00:07:25.560 10233.698 - 10284.111: 96.7078% ( 12) 00:07:25.560 10284.111 - 10334.523: 96.7705% ( 12) 00:07:25.560 10334.523 - 10384.935: 96.8384% ( 13) 00:07:25.560 10384.935 - 10435.348: 96.9168% ( 15) 00:07:25.560 10435.348 - 10485.760: 96.9795% ( 12) 00:07:25.560 10485.760 - 10536.172: 97.0579% ( 15) 00:07:25.560 10536.172 - 10586.585: 97.1102% ( 10) 00:07:25.560 10586.585 - 10636.997: 97.1729% ( 12) 00:07:25.560 10636.997 - 10687.409: 97.2251% ( 10) 00:07:25.560 10687.409 - 10737.822: 97.2722% ( 9) 00:07:25.560 10737.822 - 10788.234: 97.3192% ( 9) 00:07:25.560 10788.234 - 10838.646: 97.3453% ( 5) 00:07:25.560 10838.646 - 10889.058: 97.3767% ( 6) 00:07:25.560 10889.058 - 10939.471: 97.4028% ( 5) 00:07:25.560 10939.471 - 10989.883: 97.4342% ( 6) 00:07:25.560 10989.883 - 11040.295: 97.4603% ( 5) 00:07:25.560 11040.295 - 11090.708: 97.5073% ( 9) 00:07:25.560 11090.708 - 11141.120: 97.5596% ( 10) 00:07:25.560 11141.120 - 11191.532: 97.6118% ( 10) 00:07:25.560 11191.532 - 11241.945: 97.6536% ( 8) 00:07:25.560 11241.945 - 11292.357: 97.6902% ( 7) 00:07:25.560 11292.357 - 11342.769: 97.7268% ( 7) 00:07:25.560 11342.769 - 11393.182: 97.7686% ( 8) 00:07:25.560 11393.182 - 11443.594: 97.8104% ( 8) 00:07:25.560 11443.594 - 11494.006: 97.8470% ( 7) 00:07:25.560 11494.006 - 11544.418: 97.8888% ( 8) 00:07:25.560 11544.418 - 11594.831: 97.9306% ( 8) 00:07:25.560 11594.831 - 11645.243: 97.9672% ( 7) 00:07:25.560 11645.243 - 11695.655: 98.0038% ( 7) 00:07:25.560 11695.655 - 11746.068: 98.0456% ( 8) 00:07:25.560 11746.068 - 11796.480: 98.0821% ( 7) 00:07:25.560 11796.480 - 11846.892: 98.1083% ( 5) 00:07:25.560 11846.892 - 11897.305: 98.1396% ( 6) 00:07:25.560 11897.305 - 11947.717: 98.1658% ( 5) 00:07:25.560 11947.717 - 11998.129: 98.1814% ( 3) 00:07:25.560 11998.129 - 12048.542: 98.2076% ( 5) 00:07:25.560 12048.542 - 12098.954: 98.2285% ( 4) 00:07:25.560 12098.954 - 12149.366: 98.2494% ( 4) 00:07:25.560 12149.366 - 12199.778: 98.2860% ( 7) 00:07:25.560 12199.778 - 12250.191: 98.3121% ( 5) 00:07:25.560 12250.191 - 12300.603: 98.3330% ( 4) 00:07:25.560 12300.603 - 12351.015: 98.3696% ( 7) 00:07:25.560 12351.015 - 12401.428: 98.3905% ( 4) 00:07:25.560 12401.428 - 12451.840: 98.4218% ( 6) 00:07:25.560 12451.840 - 12502.252: 98.4375% ( 3) 00:07:25.560 12502.252 - 12552.665: 98.4741% ( 7) 00:07:25.560 12552.665 - 12603.077: 98.5054% ( 6) 00:07:25.560 12603.077 - 12653.489: 98.5368% ( 6) 00:07:25.560 12653.489 - 12703.902: 98.5681% ( 6) 00:07:25.560 12703.902 - 12754.314: 98.5943% ( 5) 00:07:25.560 12754.314 - 12804.726: 98.6152% ( 4) 00:07:25.560 12804.726 - 12855.138: 98.6309% ( 3) 00:07:25.560 12855.138 - 12905.551: 98.6622% ( 6) 
00:07:25.560 12905.551 - 13006.375: 98.7092% ( 9) 00:07:25.560 13006.375 - 13107.200: 98.7510% ( 8) 00:07:25.560 13107.200 - 13208.025: 98.8138% ( 12) 00:07:25.560 13208.025 - 13308.849: 98.8765% ( 12) 00:07:25.560 13308.849 - 13409.674: 98.9130% ( 7) 00:07:25.560 13409.674 - 13510.498: 98.9339% ( 4) 00:07:25.560 13510.498 - 13611.323: 98.9548% ( 4) 00:07:25.560 13611.323 - 13712.148: 98.9758% ( 4) 00:07:25.560 13712.148 - 13812.972: 98.9967% ( 4) 00:07:25.560 14720.394 - 14821.218: 99.0071% ( 2) 00:07:25.560 14821.218 - 14922.043: 99.0280% ( 4) 00:07:25.560 14922.043 - 15022.868: 99.0541% ( 5) 00:07:25.560 15022.868 - 15123.692: 99.0698% ( 3) 00:07:25.560 15123.692 - 15224.517: 99.0855% ( 3) 00:07:25.560 15224.517 - 15325.342: 99.1064% ( 4) 00:07:25.560 15325.342 - 15426.166: 99.1273% ( 4) 00:07:25.560 15426.166 - 15526.991: 99.1482% ( 4) 00:07:25.560 15526.991 - 15627.815: 99.1691% ( 4) 00:07:25.560 15627.815 - 15728.640: 99.1900% ( 4) 00:07:25.560 15728.640 - 15829.465: 99.2161% ( 5) 00:07:25.561 15829.465 - 15930.289: 99.2318% ( 3) 00:07:25.561 15930.289 - 16031.114: 99.2527% ( 4) 00:07:25.561 16031.114 - 16131.938: 99.2736% ( 4) 00:07:25.561 16131.938 - 16232.763: 99.2997% ( 5) 00:07:25.561 16232.763 - 16333.588: 99.3207% ( 4) 00:07:25.561 16333.588 - 16434.412: 99.3311% ( 2) 00:07:25.561 25206.154 - 25306.978: 99.3468% ( 3) 00:07:25.561 25306.978 - 25407.803: 99.3677% ( 4) 00:07:25.561 25407.803 - 25508.628: 99.3834% ( 3) 00:07:25.561 25508.628 - 25609.452: 99.4043% ( 4) 00:07:25.561 25609.452 - 25710.277: 99.4304% ( 5) 00:07:25.561 25710.277 - 25811.102: 99.4513% ( 4) 00:07:25.561 25811.102 - 26012.751: 99.4983% ( 9) 00:07:25.561 26012.751 - 26214.400: 99.5454% ( 9) 00:07:25.561 26214.400 - 26416.049: 99.5872% ( 8) 00:07:25.561 26416.049 - 26617.698: 99.6342% ( 9) 00:07:25.561 26617.698 - 26819.348: 99.6656% ( 6) 00:07:25.561 30650.683 - 30852.332: 99.6865% ( 4) 00:07:25.561 30852.332 - 31053.982: 99.7126% ( 5) 00:07:25.561 31053.982 - 31255.631: 99.7387% ( 5) 00:07:25.561 31255.631 - 31457.280: 99.7701% ( 6) 00:07:25.561 31457.280 - 31658.929: 99.8014% ( 6) 00:07:25.561 31658.929 - 31860.578: 99.8276% ( 5) 00:07:25.561 31860.578 - 32062.228: 99.8589% ( 6) 00:07:25.561 32062.228 - 32263.877: 99.8903% ( 6) 00:07:25.561 32263.877 - 32465.526: 99.9216% ( 6) 00:07:25.561 32465.526 - 32667.175: 99.9530% ( 6) 00:07:25.561 32667.175 - 32868.825: 99.9843% ( 6) 00:07:25.561 32868.825 - 33070.474: 100.0000% ( 3) 00:07:25.561 00:07:25.561 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0: 00:07:25.561 ============================================================================== 00:07:25.561 Range in us Cumulative IO count 00:07:25.561 5520.148 - 5545.354: 0.0470% ( 9) 00:07:25.561 5545.354 - 5570.560: 0.1829% ( 26) 00:07:25.561 5570.560 - 5595.766: 0.5957% ( 79) 00:07:25.561 5595.766 - 5620.972: 1.0817% ( 93) 00:07:25.561 5620.972 - 5646.178: 1.8081% ( 139) 00:07:25.561 5646.178 - 5671.385: 2.5502% ( 142) 00:07:25.561 5671.385 - 5696.591: 3.4229% ( 167) 00:07:25.561 5696.591 - 5721.797: 4.4523% ( 197) 00:07:25.561 5721.797 - 5747.003: 5.7274% ( 244) 00:07:25.561 5747.003 - 5772.209: 7.3422% ( 309) 00:07:25.561 5772.209 - 5797.415: 9.1033% ( 337) 00:07:25.561 5797.415 - 5822.622: 11.0838% ( 379) 00:07:25.561 5822.622 - 5847.828: 13.1271% ( 391) 00:07:25.561 5847.828 - 5873.034: 15.2278% ( 402) 00:07:25.561 5873.034 - 5898.240: 17.5899% ( 452) 00:07:25.561 5898.240 - 5923.446: 19.9885% ( 459) 00:07:25.561 5923.446 - 5948.652: 22.4028% ( 462) 00:07:25.561 5948.652 - 5973.858: 24.8066% 
( 460) 00:07:25.561 5973.858 - 5999.065: 27.2105% ( 460) 00:07:25.561 5999.065 - 6024.271: 29.6196% ( 461) 00:07:25.561 6024.271 - 6049.477: 32.0077% ( 457) 00:07:25.561 6049.477 - 6074.683: 34.4482% ( 467) 00:07:25.561 6074.683 - 6099.889: 36.8677% ( 463) 00:07:25.561 6099.889 - 6125.095: 39.3081% ( 467) 00:07:25.561 6125.095 - 6150.302: 41.6858% ( 455) 00:07:25.561 6150.302 - 6175.508: 44.0426% ( 451) 00:07:25.561 6175.508 - 6200.714: 46.4517% ( 461) 00:07:25.561 6200.714 - 6225.920: 48.8242% ( 454) 00:07:25.561 6225.920 - 6251.126: 51.2385% ( 462) 00:07:25.561 6251.126 - 6276.332: 53.7051% ( 472) 00:07:25.561 6276.332 - 6301.538: 56.1403% ( 466) 00:07:25.561 6301.538 - 6326.745: 58.6643% ( 483) 00:07:25.561 6326.745 - 6351.951: 61.0995% ( 466) 00:07:25.561 6351.951 - 6377.157: 63.4615% ( 452) 00:07:25.561 6377.157 - 6402.363: 65.7765% ( 443) 00:07:25.561 6402.363 - 6427.569: 67.9923% ( 424) 00:07:25.561 6427.569 - 6452.775: 69.9728% ( 379) 00:07:25.561 6452.775 - 6503.188: 73.5943% ( 693) 00:07:25.561 6503.188 - 6553.600: 76.9283% ( 638) 00:07:25.561 6553.600 - 6604.012: 79.5882% ( 509) 00:07:25.561 6604.012 - 6654.425: 81.4172% ( 350) 00:07:25.561 6654.425 - 6704.837: 82.6348% ( 233) 00:07:25.561 6704.837 - 6755.249: 83.3926% ( 145) 00:07:25.561 6755.249 - 6805.662: 84.0040% ( 117) 00:07:25.561 6805.662 - 6856.074: 84.5684% ( 108) 00:07:25.561 6856.074 - 6906.486: 85.1066% ( 103) 00:07:25.561 6906.486 - 6956.898: 85.5403% ( 83) 00:07:25.561 6956.898 - 7007.311: 85.9584% ( 80) 00:07:25.561 7007.311 - 7057.723: 86.3242% ( 70) 00:07:25.561 7057.723 - 7108.135: 86.7475% ( 81) 00:07:25.561 7108.135 - 7158.548: 87.1133% ( 70) 00:07:25.561 7158.548 - 7208.960: 87.4268% ( 60) 00:07:25.561 7208.960 - 7259.372: 87.7717% ( 66) 00:07:25.561 7259.372 - 7309.785: 88.0957% ( 62) 00:07:25.561 7309.785 - 7360.197: 88.4197% ( 62) 00:07:25.561 7360.197 - 7410.609: 88.7019% ( 54) 00:07:25.561 7410.609 - 7461.022: 88.9475% ( 47) 00:07:25.561 7461.022 - 7511.434: 89.1827% ( 45) 00:07:25.561 7511.434 - 7561.846: 89.4544% ( 52) 00:07:25.561 7561.846 - 7612.258: 89.7000% ( 47) 00:07:25.561 7612.258 - 7662.671: 89.9509% ( 48) 00:07:25.561 7662.671 - 7713.083: 90.2644% ( 60) 00:07:25.561 7713.083 - 7763.495: 90.5466% ( 54) 00:07:25.561 7763.495 - 7813.908: 90.8184% ( 52) 00:07:25.561 7813.908 - 7864.320: 91.0692% ( 48) 00:07:25.561 7864.320 - 7914.732: 91.3462% ( 53) 00:07:25.561 7914.732 - 7965.145: 91.6022% ( 49) 00:07:25.561 7965.145 - 8015.557: 91.8844% ( 54) 00:07:25.561 8015.557 - 8065.969: 92.1248% ( 46) 00:07:25.561 8065.969 - 8116.382: 92.3809% ( 49) 00:07:25.561 8116.382 - 8166.794: 92.6056% ( 43) 00:07:25.561 8166.794 - 8217.206: 92.8146% ( 40) 00:07:25.561 8217.206 - 8267.618: 93.0132% ( 38) 00:07:25.561 8267.618 - 8318.031: 93.1490% ( 26) 00:07:25.561 8318.031 - 8368.443: 93.2954% ( 28) 00:07:25.561 8368.443 - 8418.855: 93.4365% ( 27) 00:07:25.561 8418.855 - 8469.268: 93.5828% ( 28) 00:07:25.561 8469.268 - 8519.680: 93.7239% ( 27) 00:07:25.561 8519.680 - 8570.092: 93.8597% ( 26) 00:07:25.561 8570.092 - 8620.505: 93.9852% ( 24) 00:07:25.561 8620.505 - 8670.917: 94.1106% ( 24) 00:07:25.561 8670.917 - 8721.329: 94.2151% ( 20) 00:07:25.561 8721.329 - 8771.742: 94.3353% ( 23) 00:07:25.561 8771.742 - 8822.154: 94.4450% ( 21) 00:07:25.561 8822.154 - 8872.566: 94.5548% ( 21) 00:07:25.561 8872.566 - 8922.978: 94.6645% ( 21) 00:07:25.561 8922.978 - 8973.391: 94.7742% ( 21) 00:07:25.561 8973.391 - 9023.803: 94.8735% ( 19) 00:07:25.561 9023.803 - 9074.215: 94.9728% ( 19) 00:07:25.561 9074.215 - 9124.628: 
00:07:25.561 [ tail of the preceding latency histogram condensed: buckets from 9124.628us through 31860.578us ]
00:07:25.562 31860.578 - 32062.228: 100.0000% (        4)
00:07:25.562 
00:07:25.562 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:25.562 ==============================================================================
00:07:25.562        Range in us     Cumulative    IO count
00:07:25.562   5494.942 -  5520.148:   0.0052% (        1)
[ intermediate buckets condensed: 5520.148us through 30852.332us ]
00:07:25.563 30852.332 - 31053.982: 100.0000% (        4)
00:07:25.564 
00:07:25.564 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:25.564 ==============================================================================
00:07:25.564        Range in us     Cumulative    IO count
00:07:25.564   5520.148 -  5545.354:   0.0105% (        2)
[ intermediate buckets condensed: 5545.354us through 29642.437us ]
00:07:25.565 29642.437 - 29844.086: 100.0000% (        5)
00:07:25.565 
00:07:25.565 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:25.565 ==============================================================================
00:07:25.565        Range in us     Cumulative    IO count
00:07:25.565   5494.942 -  5520.148:   0.0208% (        4)
[ intermediate buckets condensed: 5520.148us through 21173.169us ]
00:07:25.567 21173.169 - 21273.994: 100.0000% (        2)
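The run that follows is the write-phase perf pass. For reference, an annotated restatement of the invocation on the next log line; the flag descriptions follow the spdk_nvme_perf usage text, and the -i description (shared-memory group ID) is an assumption that may vary between SPDK versions:

    # Annotated sketch of the spdk_nvme_perf invocation captured below.
    #   -q 128    queue depth: 128 outstanding I/Os per namespace
    #   -w write  sequential-write I/O pattern
    #   -o 12288  I/O size in bytes (12 KiB, i.e. three 4 KiB blocks)
    #   -t 1      run time in seconds
    #   -LL       software latency tracking; -L prints the percentile summary,
    #             and doubling it (-LL) also prints the detailed histograms
    #   -i 0      shared-memory group ID (assumed; check spdk_nvme_perf --help)
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0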
00:07:25.567 
00:07:25.567 02:02:50 nvme.nvme_perf -- nvme/nvme.sh@23 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -q 128 -w write -o 12288 -t 1 -LL -i 0
00:07:26.938 Initializing NVMe Controllers
00:07:26.938 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010]
00:07:26.938 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010]
00:07:26.938 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010]
00:07:26.938 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010]
00:07:26.938 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0
00:07:26.938 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0
00:07:26.938 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0
00:07:26.938 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0
00:07:26.938 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0
00:07:26.938 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0
00:07:26.938 Initialization complete. Launching workers.
00:07:26.938 ========================================================
00:07:26.938                                                                             Latency(us)
00:07:26.938 Device Information                     :       IOPS      MiB/s    Average        min        max
00:07:26.938 PCIE (0000:00:10.0) NSID 1 from core  0:   18323.31     214.73    6993.80    5747.60   29310.70
00:07:26.938 PCIE (0000:00:11.0) NSID 1 from core  0:   18323.31     214.73    6982.86    5831.01   27816.67
00:07:26.938 PCIE (0000:00:13.0) NSID 1 from core  0:   18323.31     214.73    6971.66    5924.15   26290.81
00:07:26.938 PCIE (0000:00:12.0) NSID 1 from core  0:   18323.31     214.73    6960.52    5861.47   24508.44
00:07:26.938 PCIE (0000:00:12.0) NSID 2 from core  0:   18323.31     214.73    6949.30    5855.10   22706.14
00:07:26.938 PCIE (0000:00:12.0) NSID 3 from core  0:   18387.15     215.47    6914.15    5789.65   17640.13
00:07:26.938 ========================================================
00:07:26.939 Total                                  :  110003.70    1289.11    6962.02    5747.60   29310.70
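The Total row above is the six per-namespace IOPS figures summed: five namespaces report 18323.31 IOPS and one reports 18387.15. A quick check, assuming bc is available on the build host:

    # 5 * 18323.31 + 18387.15 = 110003.70, matching the Total IOPS column.
    echo '5 * 18323.31 + 18387.15' | bc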
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6099.889us
00:07:26.939 10.00000% :  6301.538us
00:07:26.939 25.00000% :  6427.569us
00:07:26.939 50.00000% :  6704.837us
00:07:26.939 75.00000% :  7007.311us
00:07:26.939 90.00000% :  7763.495us
00:07:26.939 95.00000% :  8368.443us
00:07:26.939 98.00000% :  9124.628us
00:07:26.939 99.00000% : 12804.726us
00:07:26.939 99.50000% : 24802.855us
00:07:26.939 99.90000% : 29037.489us
00:07:26.939 99.99000% : 29239.138us
00:07:26.939 99.99900% : 29440.788us
00:07:26.939 99.99990% : 29440.788us
00:07:26.939 99.99999% : 29440.788us
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6200.714us
00:07:26.939 10.00000% :  6402.363us
00:07:26.939 25.00000% :  6503.188us
00:07:26.939 50.00000% :  6654.425us
00:07:26.939 75.00000% :  6956.898us
00:07:26.939 90.00000% :  7713.083us
00:07:26.939 95.00000% :  8318.031us
00:07:26.939 98.00000% :  9124.628us
00:07:26.939 99.00000% : 13812.972us
00:07:26.939 99.50000% : 22887.188us
00:07:26.939 99.90000% : 27424.295us
00:07:26.939 99.99000% : 27827.594us
00:07:26.939 99.99900% : 27827.594us
00:07:26.939 99.99990% : 27827.594us
00:07:26.939 99.99999% : 27827.594us
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6225.920us
00:07:26.939 10.00000% :  6402.363us
00:07:26.939 25.00000% :  6503.188us
00:07:26.939 50.00000% :  6654.425us
00:07:26.939 75.00000% :  6906.486us
00:07:26.939 90.00000% :  7713.083us
00:07:26.939 95.00000% :  8318.031us
00:07:26.939 98.00000% :  9124.628us
00:07:26.939 99.00000% : 13913.797us
00:07:26.939 99.50000% : 21173.169us
00:07:26.939 99.90000% : 26012.751us
00:07:26.939 99.99000% : 26416.049us
00:07:26.939 99.99900% : 26416.049us
00:07:26.939 99.99990% : 26416.049us
00:07:26.939 99.99999% : 26416.049us
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6200.714us
00:07:26.939 10.00000% :  6402.363us
00:07:26.939 25.00000% :  6503.188us
00:07:26.939 50.00000% :  6654.425us
00:07:26.939 75.00000% :  6906.486us
00:07:26.939 90.00000% :  7713.083us
00:07:26.939 95.00000% :  8368.443us
00:07:26.939 98.00000% :  9124.628us
00:07:26.939 99.00000% : 13712.148us
00:07:26.939 99.50000% : 19459.151us
00:07:26.939 99.90000% : 24097.083us
00:07:26.939 99.99000% : 24500.382us
00:07:26.939 99.99900% : 24601.206us
00:07:26.939 99.99990% : 24601.206us
00:07:26.939 99.99999% : 24601.206us
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:12.0) NSID 2 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6200.714us
00:07:26.939 10.00000% :  6402.363us
00:07:26.939 25.00000% :  6503.188us
00:07:26.939 50.00000% :  6654.425us
00:07:26.939 75.00000% :  6906.486us
00:07:26.939 90.00000% :  7662.671us
00:07:26.939 95.00000% :  8418.855us
00:07:26.939 98.00000% :  9124.628us
00:07:26.939 99.00000% : 13107.200us
00:07:26.939 99.50000% : 17644.308us
00:07:26.939 99.90000% : 22282.240us
00:07:26.939 99.99000% : 22685.538us
00:07:26.939 99.99900% : 22786.363us
00:07:26.939 99.99990% : 22786.363us
00:07:26.939 99.99999% : 22786.363us
00:07:26.939 
00:07:26.939 Summary latency data for PCIE (0000:00:12.0) NSID 3 from core 0:
00:07:26.939 =================================================================================
00:07:26.939  1.00000% :  6200.714us
00:07:26.939 10.00000% :  6402.363us
00:07:26.939 25.00000% :  6553.600us
00:07:26.939 50.00000% :  6654.425us
00:07:26.939 75.00000% :  6906.486us
00:07:26.939 90.00000% :  7713.083us
00:07:26.939 95.00000% :  8368.443us
00:07:26.939 98.00000% :  9074.215us
00:07:26.939 99.00000% : 12401.428us
00:07:26.939 99.50000% : 12855.138us
00:07:26.939 99.90000% : 17241.009us
00:07:26.939 99.99000% : 17644.308us
00:07:26.939 99.99900% : 17644.308us
00:07:26.939 99.99990% : 17644.308us
00:07:26.939 99.99999% : 17644.308us
00:07:26.939 
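Every bucket line in the histograms that follow (and in the read-phase histograms above) has the shape "<range start> - <range end>: <cumulative %> ( <IO count> )" after its timestamp. A minimal sketch for pulling a latency percentile out of a saved console log; the file name and the 99% threshold are placeholders rather than part of this build:

    # Print the start of the first bucket whose cumulative percentage reaches 99%.
    # grep isolates the bucket entries; awk splits on spaces, '-', ':' and '%',
    # leaving the bucket start in $1 and the cumulative percentage in $3.
    grep -oE '[0-9]+\.[0-9]+ - [0-9]+\.[0-9]+: *[0-9]+\.[0-9]+%' console.log |
      awk -F'[ :%-]+' '$3 + 0 >= 99 { print "p99 bucket starts at " $1 " us"; exit }'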
00:07:26.939 Latency histogram for PCIE (0000:00:10.0) NSID 1 from core 0:
00:07:26.939 ==============================================================================
00:07:26.939        Range in us     Cumulative    IO count
00:07:26.939   5747.003 -  5772.209:   0.0054% (        1)
[ intermediate buckets condensed: 5772.209us through 29239.138us ]
00:07:26.940 29239.138 - 29440.788: 100.0000% (        1)
00:07:26.940 
00:07:26.940 Latency histogram for PCIE (0000:00:11.0) NSID 1 from core 0:
00:07:26.940 ==============================================================================
00:07:26.940        Range in us     Cumulative    IO count
00:07:26.940   5822.622 -  5847.828:   0.0054% (        1)
[ intermediate buckets condensed: 5847.828us through 27625.945us ]
00:07:26.940 27625.945 - 27827.594: 100.0000% (        8)
00:07:26.940 
00:07:26.940 Latency histogram for PCIE (0000:00:13.0) NSID 1 from core 0:
00:07:26.940 ==============================================================================
00:07:26.940        Range in us     Cumulative    IO count
00:07:26.940   5923.446 -  5948.652:   0.0163% (        3)
[ intermediate buckets condensed: 5948.652us through 26214.400us ]
00:07:26.941 26214.400 - 26416.049: 100.0000% (        4)
00:07:26.941 
00:07:26.941 Latency histogram for PCIE (0000:00:12.0) NSID 1 from core 0:
00:07:26.941 ==============================================================================
00:07:26.941        Range in us     Cumulative    IO count
00:07:26.941   5847.828 -  5873.034:   0.0054% (        1)
[ intermediate buckets condensed: 5873.034us through 10536.172us ]
00:07:26.941 10536.172 - 10586.585:  98.5682% (
4) 00:07:26.941 10586.585 - 10636.997: 98.5899% ( 4) 00:07:26.941 10636.997 - 10687.409: 98.6172% ( 5) 00:07:26.941 10687.409 - 10737.822: 98.6280% ( 2) 00:07:26.941 10737.822 - 10788.234: 98.6553% ( 5) 00:07:26.941 10788.234 - 10838.646: 98.6770% ( 4) 00:07:26.941 10838.646 - 10889.058: 98.6988% ( 4) 00:07:26.941 10889.058 - 10939.471: 98.7152% ( 3) 00:07:26.941 10939.471 - 10989.883: 98.7369% ( 4) 00:07:26.941 10989.883 - 11040.295: 98.7424% ( 1) 00:07:26.942 11040.295 - 11090.708: 98.7642% ( 4) 00:07:26.942 11090.708 - 11141.120: 98.7968% ( 6) 00:07:26.942 11141.120 - 11191.532: 98.8404% ( 8) 00:07:26.942 11191.532 - 11241.945: 98.8785% ( 7) 00:07:26.942 11241.945 - 11292.357: 98.9111% ( 6) 00:07:26.942 11292.357 - 11342.769: 98.9438% ( 6) 00:07:26.942 11342.769 - 11393.182: 98.9547% ( 2) 00:07:26.942 13510.498 - 13611.323: 98.9710% ( 3) 00:07:26.942 13611.323 - 13712.148: 99.1126% ( 26) 00:07:26.942 13712.148 - 13812.972: 99.1779% ( 12) 00:07:26.942 13812.972 - 13913.797: 99.2051% ( 5) 00:07:26.942 13913.797 - 14014.622: 99.2269% ( 4) 00:07:26.942 14014.622 - 14115.446: 99.2650% ( 7) 00:07:26.942 14115.446 - 14216.271: 99.2977% ( 6) 00:07:26.942 14216.271 - 14317.095: 99.3031% ( 1) 00:07:26.942 18450.905 - 18551.729: 99.3086% ( 1) 00:07:26.942 18551.729 - 18652.554: 99.3304% ( 4) 00:07:26.942 18652.554 - 18753.378: 99.3521% ( 4) 00:07:26.942 18753.378 - 18854.203: 99.3739% ( 4) 00:07:26.942 18854.203 - 18955.028: 99.3957% ( 4) 00:07:26.942 18955.028 - 19055.852: 99.4229% ( 5) 00:07:26.942 19055.852 - 19156.677: 99.4447% ( 4) 00:07:26.942 19156.677 - 19257.502: 99.4665% ( 4) 00:07:26.942 19257.502 - 19358.326: 99.4882% ( 4) 00:07:26.942 19358.326 - 19459.151: 99.5155% ( 5) 00:07:26.942 19459.151 - 19559.975: 99.5372% ( 4) 00:07:26.942 19559.975 - 19660.800: 99.5590% ( 4) 00:07:26.942 19660.800 - 19761.625: 99.5808% ( 4) 00:07:26.942 19761.625 - 19862.449: 99.6026% ( 4) 00:07:26.942 19862.449 - 19963.274: 99.6243% ( 4) 00:07:26.942 19963.274 - 20064.098: 99.6461% ( 4) 00:07:26.942 20064.098 - 20164.923: 99.6516% ( 1) 00:07:26.942 22887.188 - 22988.012: 99.6570% ( 1) 00:07:26.942 22988.012 - 23088.837: 99.6788% ( 4) 00:07:26.942 23088.837 - 23189.662: 99.7006% ( 4) 00:07:26.942 23189.662 - 23290.486: 99.7223% ( 4) 00:07:26.942 23290.486 - 23391.311: 99.7441% ( 4) 00:07:26.942 23391.311 - 23492.135: 99.7713% ( 5) 00:07:26.942 23492.135 - 23592.960: 99.7931% ( 4) 00:07:26.942 23592.960 - 23693.785: 99.8149% ( 4) 00:07:26.942 23693.785 - 23794.609: 99.8367% ( 4) 00:07:26.942 23794.609 - 23895.434: 99.8584% ( 4) 00:07:26.942 23895.434 - 23996.258: 99.8802% ( 4) 00:07:26.942 23996.258 - 24097.083: 99.9020% ( 4) 00:07:26.942 24097.083 - 24197.908: 99.9238% ( 4) 00:07:26.942 24197.908 - 24298.732: 99.9510% ( 5) 00:07:26.942 24298.732 - 24399.557: 99.9728% ( 4) 00:07:26.942 24399.557 - 24500.382: 99.9946% ( 4) 00:07:26.942 24500.382 - 24601.206: 100.0000% ( 1) 00:07:26.942 00:07:26.942 Latency histogram for PCIE (0000:00:12.0) NSID 2 from core 0: 00:07:26.942 ============================================================================== 00:07:26.942 Range in us Cumulative IO count 00:07:26.942 5847.828 - 5873.034: 0.0054% ( 1) 00:07:26.942 5873.034 - 5898.240: 0.0109% ( 1) 00:07:26.942 5898.240 - 5923.446: 0.0218% ( 2) 00:07:26.942 5923.446 - 5948.652: 0.0490% ( 5) 00:07:26.942 5948.652 - 5973.858: 0.0708% ( 4) 00:07:26.942 5973.858 - 5999.065: 0.1143% ( 8) 00:07:26.942 5999.065 - 6024.271: 0.1851% ( 13) 00:07:26.942 6024.271 - 6049.477: 0.2559% ( 13) 00:07:26.942 6049.477 - 6074.683: 0.3484% ( 
17) 00:07:26.942 6074.683 - 6099.889: 0.4301% ( 15) 00:07:26.942 6099.889 - 6125.095: 0.5281% ( 18) 00:07:26.942 6125.095 - 6150.302: 0.6805% ( 28) 00:07:26.942 6150.302 - 6175.508: 0.8765% ( 36) 00:07:26.942 6175.508 - 6200.714: 1.1868% ( 57) 00:07:26.942 6200.714 - 6225.920: 1.5625% ( 69) 00:07:26.942 6225.920 - 6251.126: 2.0579% ( 91) 00:07:26.942 6251.126 - 6276.332: 2.6840% ( 115) 00:07:26.942 6276.332 - 6301.538: 3.5932% ( 167) 00:07:26.942 6301.538 - 6326.745: 4.8399% ( 229) 00:07:26.942 6326.745 - 6351.951: 6.4297% ( 292) 00:07:26.942 6351.951 - 6377.157: 8.5311% ( 386) 00:07:26.942 6377.157 - 6402.363: 11.1662% ( 484) 00:07:26.942 6402.363 - 6427.569: 13.6651% ( 459) 00:07:26.942 6427.569 - 6452.775: 16.6159% ( 542) 00:07:26.942 6452.775 - 6503.188: 25.1688% ( 1571) 00:07:26.942 6503.188 - 6553.600: 34.8976% ( 1787) 00:07:26.942 6553.600 - 6604.012: 43.2437% ( 1533) 00:07:26.942 6604.012 - 6654.425: 50.5499% ( 1342) 00:07:26.942 6654.425 - 6704.837: 57.2681% ( 1234) 00:07:26.942 6704.837 - 6755.249: 64.3020% ( 1292) 00:07:26.942 6755.249 - 6805.662: 69.3543% ( 928) 00:07:26.942 6805.662 - 6856.074: 73.1108% ( 690) 00:07:26.942 6856.074 - 6906.486: 76.1433% ( 557) 00:07:26.942 6906.486 - 6956.898: 78.2230% ( 382) 00:07:26.942 6956.898 - 7007.311: 79.9543% ( 318) 00:07:26.942 7007.311 - 7057.723: 81.5113% ( 286) 00:07:26.942 7057.723 - 7108.135: 82.4151% ( 166) 00:07:26.942 7108.135 - 7158.548: 83.0303% ( 113) 00:07:26.942 7158.548 - 7208.960: 83.7435% ( 131) 00:07:26.942 7208.960 - 7259.372: 84.4948% ( 138) 00:07:26.942 7259.372 - 7309.785: 85.3550% ( 158) 00:07:26.942 7309.785 - 7360.197: 85.9647% ( 112) 00:07:26.942 7360.197 - 7410.609: 86.6834% ( 132) 00:07:26.942 7410.609 - 7461.022: 87.3966% ( 131) 00:07:26.942 7461.022 - 7511.434: 88.0009% ( 111) 00:07:26.942 7511.434 - 7561.846: 88.5126% ( 94) 00:07:26.942 7561.846 - 7612.258: 89.3293% ( 150) 00:07:26.942 7612.258 - 7662.671: 90.1459% ( 150) 00:07:26.942 7662.671 - 7713.083: 90.5216% ( 69) 00:07:26.942 7713.083 - 7763.495: 91.0170% ( 91) 00:07:26.942 7763.495 - 7813.908: 91.4199% ( 74) 00:07:26.942 7813.908 - 7864.320: 91.6703% ( 46) 00:07:26.942 7864.320 - 7914.732: 91.9534% ( 52) 00:07:26.942 7914.732 - 7965.145: 92.2528% ( 55) 00:07:26.942 7965.145 - 8015.557: 92.4597% ( 38) 00:07:26.942 8015.557 - 8065.969: 92.7646% ( 56) 00:07:26.942 8065.969 - 8116.382: 93.0912% ( 60) 00:07:26.942 8116.382 - 8166.794: 93.3635% ( 50) 00:07:26.942 8166.794 - 8217.206: 93.7337% ( 68) 00:07:26.942 8217.206 - 8267.618: 94.2944% ( 103) 00:07:26.942 8267.618 - 8318.031: 94.5993% ( 56) 00:07:26.942 8318.031 - 8368.443: 94.7844% ( 34) 00:07:26.942 8368.443 - 8418.855: 95.0947% ( 57) 00:07:26.942 8418.855 - 8469.268: 95.4813% ( 71) 00:07:26.942 8469.268 - 8519.680: 95.8079% ( 60) 00:07:26.942 8519.680 - 8570.092: 96.0475% ( 44) 00:07:26.942 8570.092 - 8620.505: 96.2816% ( 43) 00:07:26.942 8620.505 - 8670.917: 96.5102% ( 42) 00:07:26.942 8670.917 - 8721.329: 96.8151% ( 56) 00:07:26.942 8721.329 - 8771.742: 97.1309% ( 58) 00:07:26.942 8771.742 - 8822.154: 97.2942% ( 30) 00:07:26.942 8822.154 - 8872.566: 97.4412% ( 27) 00:07:26.942 8872.566 - 8922.978: 97.5610% ( 22) 00:07:26.942 8922.978 - 8973.391: 97.6481% ( 16) 00:07:26.942 8973.391 - 9023.803: 97.7243% ( 14) 00:07:26.942 9023.803 - 9074.215: 97.9149% ( 35) 00:07:26.942 9074.215 - 9124.628: 98.0074% ( 17) 00:07:26.942 9124.628 - 9175.040: 98.0564% ( 9) 00:07:26.942 9175.040 - 9225.452: 98.1108% ( 10) 00:07:26.942 9225.452 - 9275.865: 98.1816% ( 13) 00:07:26.942 9275.865 - 9326.277: 98.2524% ( 
13) 00:07:26.942 9326.277 - 9376.689: 98.3123% ( 11) 00:07:26.942 9376.689 - 9427.102: 98.3449% ( 6) 00:07:26.942 9427.102 - 9477.514: 98.3885% ( 8) 00:07:26.942 9477.514 - 9527.926: 98.4538% ( 12) 00:07:26.942 9527.926 - 9578.338: 98.4756% ( 4) 00:07:26.942 9578.338 - 9628.751: 98.4811% ( 1) 00:07:26.942 9628.751 - 9679.163: 98.4865% ( 1) 00:07:26.942 9679.163 - 9729.575: 98.4974% ( 2) 00:07:26.942 9729.575 - 9779.988: 98.5028% ( 1) 00:07:26.942 9779.988 - 9830.400: 98.5137% ( 2) 00:07:26.942 9830.400 - 9880.812: 98.5192% ( 1) 00:07:26.942 9880.812 - 9931.225: 98.5301% ( 2) 00:07:26.942 9931.225 - 9981.637: 98.5409% ( 2) 00:07:26.942 9981.637 - 10032.049: 98.5518% ( 2) 00:07:26.942 10032.049 - 10082.462: 98.5627% ( 2) 00:07:26.942 10082.462 - 10132.874: 98.5736% ( 2) 00:07:26.942 10132.874 - 10183.286: 98.5845% ( 2) 00:07:26.942 10183.286 - 10233.698: 98.5899% ( 1) 00:07:26.942 10233.698 - 10284.111: 98.6063% ( 3) 00:07:26.942 11040.295 - 11090.708: 98.6117% ( 1) 00:07:26.942 11141.120 - 11191.532: 98.6226% ( 2) 00:07:26.942 11191.532 - 11241.945: 98.6335% ( 2) 00:07:26.942 11241.945 - 11292.357: 98.6444% ( 2) 00:07:26.942 11292.357 - 11342.769: 98.6607% ( 3) 00:07:26.942 11342.769 - 11393.182: 98.6716% ( 2) 00:07:26.942 11393.182 - 11443.594: 98.6825% ( 2) 00:07:26.942 11443.594 - 11494.006: 98.6934% ( 2) 00:07:26.942 11494.006 - 11544.418: 98.7043% ( 2) 00:07:26.942 11544.418 - 11594.831: 98.7152% ( 2) 00:07:26.942 11594.831 - 11645.243: 98.7260% ( 2) 00:07:26.942 11645.243 - 11695.655: 98.7369% ( 2) 00:07:26.942 11695.655 - 11746.068: 98.7424% ( 1) 00:07:26.942 11746.068 - 11796.480: 98.7533% ( 2) 00:07:26.942 11796.480 - 11846.892: 98.7696% ( 3) 00:07:26.942 11846.892 - 11897.305: 98.7859% ( 3) 00:07:26.942 11897.305 - 11947.717: 98.8023% ( 3) 00:07:26.942 11947.717 - 11998.129: 98.8240% ( 4) 00:07:26.942 11998.129 - 12048.542: 98.8567% ( 6) 00:07:26.942 12048.542 - 12098.954: 98.8894% ( 6) 00:07:26.942 12098.954 - 12149.366: 98.9220% ( 6) 00:07:26.942 12149.366 - 12199.778: 98.9329% ( 2) 00:07:26.942 12199.778 - 12250.191: 98.9384% ( 1) 00:07:26.942 12250.191 - 12300.603: 98.9493% ( 2) 00:07:26.942 12300.603 - 12351.015: 98.9547% ( 1) 00:07:26.942 13006.375 - 13107.200: 99.0255% ( 13) 00:07:26.942 13107.200 - 13208.025: 99.1398% ( 21) 00:07:26.942 13208.025 - 13308.849: 99.1616% ( 4) 00:07:26.942 13308.849 - 13409.674: 99.1834% ( 4) 00:07:26.942 13409.674 - 13510.498: 99.1997% ( 3) 00:07:26.942 13510.498 - 13611.323: 99.2215% ( 4) 00:07:26.942 13611.323 - 13712.148: 99.2378% ( 3) 00:07:26.942 13712.148 - 13812.972: 99.2596% ( 4) 00:07:26.942 13812.972 - 13913.797: 99.2650% ( 1) 00:07:26.942 13913.797 - 14014.622: 99.2814% ( 3) 00:07:26.942 14014.622 - 14115.446: 99.3031% ( 4) 00:07:26.942 16636.062 - 16736.886: 99.3086% ( 1) 00:07:26.942 16736.886 - 16837.711: 99.3304% ( 4) 00:07:26.942 16837.711 - 16938.535: 99.3521% ( 4) 00:07:26.942 16938.535 - 17039.360: 99.3739% ( 4) 00:07:26.942 17039.360 - 17140.185: 99.4011% ( 5) 00:07:26.942 17140.185 - 17241.009: 99.4229% ( 4) 00:07:26.942 17241.009 - 17341.834: 99.4447% ( 4) 00:07:26.942 17341.834 - 17442.658: 99.4665% ( 4) 00:07:26.942 17442.658 - 17543.483: 99.4882% ( 4) 00:07:26.942 17543.483 - 17644.308: 99.5100% ( 4) 00:07:26.942 17644.308 - 17745.132: 99.5372% ( 5) 00:07:26.943 17745.132 - 17845.957: 99.5590% ( 4) 00:07:26.943 17845.957 - 17946.782: 99.5808% ( 4) 00:07:26.943 17946.782 - 18047.606: 99.6026% ( 4) 00:07:26.943 18047.606 - 18148.431: 99.6243% ( 4) 00:07:26.943 18148.431 - 18249.255: 99.6516% ( 5) 00:07:26.943 21072.345 
- 21173.169: 99.6625% ( 2) 00:07:26.943 21173.169 - 21273.994: 99.6842% ( 4) 00:07:26.943 21273.994 - 21374.818: 99.7115% ( 5) 00:07:26.943 21374.818 - 21475.643: 99.7332% ( 4) 00:07:26.943 21475.643 - 21576.468: 99.7496% ( 3) 00:07:26.943 21576.468 - 21677.292: 99.7713% ( 4) 00:07:26.943 21677.292 - 21778.117: 99.7931% ( 4) 00:07:26.943 21778.117 - 21878.942: 99.8149% ( 4) 00:07:26.943 21878.942 - 21979.766: 99.8367% ( 4) 00:07:26.943 21979.766 - 22080.591: 99.8639% ( 5) 00:07:26.943 22080.591 - 22181.415: 99.8857% ( 4) 00:07:26.943 22181.415 - 22282.240: 99.9074% ( 4) 00:07:26.943 22282.240 - 22383.065: 99.9292% ( 4) 00:07:26.943 22383.065 - 22483.889: 99.9456% ( 3) 00:07:26.943 22483.889 - 22584.714: 99.9673% ( 4) 00:07:26.943 22584.714 - 22685.538: 99.9946% ( 5) 00:07:26.943 22685.538 - 22786.363: 100.0000% ( 1) 00:07:26.943 00:07:26.943 Latency histogram for PCIE (0000:00:12.0) NSID 3 from core 0: 00:07:26.943 ============================================================================== 00:07:26.943 Range in us Cumulative IO count 00:07:26.943 5772.209 - 5797.415: 0.0054% ( 1) 00:07:26.943 5822.622 - 5847.828: 0.0163% ( 2) 00:07:26.943 5847.828 - 5873.034: 0.0271% ( 2) 00:07:26.943 5873.034 - 5898.240: 0.0380% ( 2) 00:07:26.943 5898.240 - 5923.446: 0.0488% ( 2) 00:07:26.943 5923.446 - 5948.652: 0.0597% ( 2) 00:07:26.943 5948.652 - 5973.858: 0.0651% ( 1) 00:07:26.943 5973.858 - 5999.065: 0.0922% ( 5) 00:07:26.943 5999.065 - 6024.271: 0.1248% ( 6) 00:07:26.943 6024.271 - 6049.477: 0.1953% ( 13) 00:07:26.943 6049.477 - 6074.683: 0.2821% ( 16) 00:07:26.943 6074.683 - 6099.889: 0.3906% ( 20) 00:07:26.943 6099.889 - 6125.095: 0.5100% ( 22) 00:07:26.943 6125.095 - 6150.302: 0.6782% ( 31) 00:07:26.943 6150.302 - 6175.508: 0.8518% ( 32) 00:07:26.943 6175.508 - 6200.714: 1.0905% ( 44) 00:07:26.943 6200.714 - 6225.920: 1.4811% ( 72) 00:07:26.943 6225.920 - 6251.126: 2.0128% ( 98) 00:07:26.943 6251.126 - 6276.332: 2.8863% ( 161) 00:07:26.943 6276.332 - 6301.538: 3.7652% ( 162) 00:07:26.943 6301.538 - 6326.745: 4.9262% ( 214) 00:07:26.943 6326.745 - 6351.951: 6.4453% ( 280) 00:07:26.943 6351.951 - 6377.157: 8.3713% ( 355) 00:07:26.943 6377.157 - 6402.363: 10.6717% ( 424) 00:07:26.943 6402.363 - 6427.569: 13.2704% ( 479) 00:07:26.943 6427.569 - 6452.775: 16.3628% ( 570) 00:07:26.943 6452.775 - 6503.188: 24.4466% ( 1490) 00:07:26.943 6503.188 - 6553.600: 33.7511% ( 1715) 00:07:26.943 6553.600 - 6604.012: 43.0556% ( 1715) 00:07:26.943 6604.012 - 6654.425: 50.9766% ( 1460) 00:07:26.943 6654.425 - 6704.837: 58.0024% ( 1295) 00:07:26.943 6704.837 - 6755.249: 63.9323% ( 1093) 00:07:26.943 6755.249 - 6805.662: 69.0538% ( 944) 00:07:26.943 6805.662 - 6856.074: 72.9980% ( 727) 00:07:26.943 6856.074 - 6906.486: 75.4883% ( 459) 00:07:26.943 6906.486 - 6956.898: 77.5879% ( 387) 00:07:26.943 6956.898 - 7007.311: 79.3294% ( 321) 00:07:26.943 7007.311 - 7057.723: 80.9299% ( 295) 00:07:26.943 7057.723 - 7108.135: 81.9336% ( 185) 00:07:26.943 7108.135 - 7158.548: 83.0404% ( 204) 00:07:26.943 7158.548 - 7208.960: 83.7836% ( 137) 00:07:26.943 7208.960 - 7259.372: 84.5378% ( 139) 00:07:26.943 7259.372 - 7309.785: 85.4655% ( 171) 00:07:26.943 7309.785 - 7360.197: 86.1111% ( 119) 00:07:26.943 7360.197 - 7410.609: 86.5560% ( 82) 00:07:26.943 7410.609 - 7461.022: 87.1365% ( 107) 00:07:26.943 7461.022 - 7511.434: 87.9123% ( 143) 00:07:26.943 7511.434 - 7561.846: 88.5037% ( 109) 00:07:26.943 7561.846 - 7612.258: 89.0408% ( 99) 00:07:26.943 7612.258 - 7662.671: 89.5616% ( 96) 00:07:26.943 7662.671 - 7713.083: 90.1747% ( 
113) 00:07:26.943 7713.083 - 7763.495: 90.6630% ( 90) 00:07:26.943 7763.495 - 7813.908: 90.9505% ( 53) 00:07:26.943 7813.908 - 7864.320: 91.2760% ( 60) 00:07:26.943 7864.320 - 7914.732: 91.7209% ( 82) 00:07:26.943 7914.732 - 7965.145: 92.1332% ( 76) 00:07:26.943 7965.145 - 8015.557: 92.5076% ( 69) 00:07:26.943 8015.557 - 8065.969: 92.8711% ( 67) 00:07:26.943 8065.969 - 8116.382: 93.1912% ( 59) 00:07:26.943 8116.382 - 8166.794: 93.6361% ( 82) 00:07:26.943 8166.794 - 8217.206: 94.0484% ( 76) 00:07:26.943 8217.206 - 8267.618: 94.4282% ( 70) 00:07:26.943 8267.618 - 8318.031: 94.7591% ( 61) 00:07:26.943 8318.031 - 8368.443: 95.1009% ( 63) 00:07:26.943 8368.443 - 8418.855: 95.3993% ( 55) 00:07:26.943 8418.855 - 8469.268: 95.6217% ( 41) 00:07:26.943 8469.268 - 8519.680: 95.8279% ( 38) 00:07:26.943 8519.680 - 8570.092: 96.1317% ( 56) 00:07:26.943 8570.092 - 8620.505: 96.4138% ( 52) 00:07:26.943 8620.505 - 8670.917: 96.6200% ( 38) 00:07:26.943 8670.917 - 8721.329: 96.8696% ( 46) 00:07:26.943 8721.329 - 8771.742: 97.1625% ( 54) 00:07:26.943 8771.742 - 8822.154: 97.4013% ( 44) 00:07:26.943 8822.154 - 8872.566: 97.5532% ( 28) 00:07:26.943 8872.566 - 8922.978: 97.6888% ( 25) 00:07:26.943 8922.978 - 8973.391: 97.8190% ( 24) 00:07:26.943 8973.391 - 9023.803: 97.9058% ( 16) 00:07:26.943 9023.803 - 9074.215: 98.0089% ( 19) 00:07:26.943 9074.215 - 9124.628: 98.2042% ( 36) 00:07:26.943 9124.628 - 9175.040: 98.2747% ( 13) 00:07:26.943 9175.040 - 9225.452: 98.3290% ( 10) 00:07:26.943 9225.452 - 9275.865: 98.3887% ( 11) 00:07:26.943 9275.865 - 9326.277: 98.4538% ( 12) 00:07:26.943 9326.277 - 9376.689: 98.4972% ( 8) 00:07:26.943 9376.689 - 9427.102: 98.5514% ( 10) 00:07:26.943 9427.102 - 9477.514: 98.5677% ( 3) 00:07:26.943 9477.514 - 9527.926: 98.5786% ( 2) 00:07:26.943 9527.926 - 9578.338: 98.5894% ( 2) 00:07:26.943 9578.338 - 9628.751: 98.5948% ( 1) 00:07:26.943 9628.751 - 9679.163: 98.6057% ( 2) 00:07:26.943 9679.163 - 9729.575: 98.6111% ( 1) 00:07:26.943 11191.532 - 11241.945: 98.6220% ( 2) 00:07:26.943 11241.945 - 11292.357: 98.6328% ( 2) 00:07:26.943 11292.357 - 11342.769: 98.6437% ( 2) 00:07:26.943 11342.769 - 11393.182: 98.6545% ( 2) 00:07:26.943 11393.182 - 11443.594: 98.6708% ( 3) 00:07:26.943 11443.594 - 11494.006: 98.6816% ( 2) 00:07:26.943 11494.006 - 11544.418: 98.6925% ( 2) 00:07:26.943 11544.418 - 11594.831: 98.7033% ( 2) 00:07:26.943 11594.831 - 11645.243: 98.7142% ( 2) 00:07:26.943 11645.243 - 11695.655: 98.7250% ( 2) 00:07:26.943 11695.655 - 11746.068: 98.7305% ( 1) 00:07:26.943 11746.068 - 11796.480: 98.7413% ( 2) 00:07:26.943 11796.480 - 11846.892: 98.7522% ( 2) 00:07:26.943 11846.892 - 11897.305: 98.7684% ( 3) 00:07:26.943 11897.305 - 11947.717: 98.7793% ( 2) 00:07:26.943 11947.717 - 11998.129: 98.8064% ( 5) 00:07:26.943 11998.129 - 12048.542: 98.8336% ( 5) 00:07:26.943 12048.542 - 12098.954: 98.8553% ( 4) 00:07:26.943 12098.954 - 12149.366: 98.8770% ( 4) 00:07:26.943 12149.366 - 12199.778: 98.8932% ( 3) 00:07:26.943 12199.778 - 12250.191: 98.9041% ( 2) 00:07:26.943 12250.191 - 12300.603: 98.9258% ( 4) 00:07:26.943 12300.603 - 12351.015: 98.9638% ( 7) 00:07:26.943 12351.015 - 12401.428: 99.0017% ( 7) 00:07:26.943 12401.428 - 12451.840: 99.0560% ( 10) 00:07:26.943 12451.840 - 12502.252: 99.1211% ( 12) 00:07:26.943 12502.252 - 12552.665: 99.1862% ( 12) 00:07:26.943 12552.665 - 12603.077: 99.2567% ( 13) 00:07:26.943 12603.077 - 12653.489: 99.3110% ( 10) 00:07:26.943 12653.489 - 12703.902: 99.3707% ( 11) 00:07:26.943 12703.902 - 12754.314: 99.4249% ( 10) 00:07:26.943 12754.314 - 12804.726: 
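Each latency histogram above is cumulative, so percentiles can be read straight off the buckets: the upper bound of the first bucket whose percentage reaches the target is the latency bound. A minimal helper for doing that mechanically is sketched below; it is illustrative only (not part of the SPDK test suite) and assumes the bucket lines have been extracted one per line with their timestamp prefixes stripped, so each line reads "LO - HI: PCT% ( COUNT)".

    #!/usr/bin/env bash
    # pctile.sh -- hypothetical helper: report the first bucket at or above
    # a target cumulative percentage in a histogram dump like the ones above.
    # Usage: ./pctile.sh 99 buckets.txt
    target=${1:?percentile} file=${2:?bucket file}
    awk -v t="$target" '$2 == "-" {
        hi = $3;  sub(/:$/, "", hi)    # bucket upper bound, e.g. 26416.049
        pct = $4; sub(/%$/, "", pct)   # cumulative percentage, e.g. 99.3140
        if (pct + 0 >= t) { printf "p%s <= %s us\n", t, hi; exit }
    }' "$file"

Run against the 0000:00:13.0 NSID 1 buckets, for example, this prints the upper bound of the first bucket whose cumulative count crosses 99%.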
00:07:26.943
00:07:26.943 02:02:51 nvme.nvme_perf -- nvme/nvme.sh@24 -- # '[' -b /dev/ram0 ']'
00:07:26.943
00:07:26.943 real 0m2.546s
00:07:26.943 user 0m2.204s
00:07:26.943 sys 0m0.221s
00:07:26.943 02:02:51 nvme.nvme_perf -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.943 02:02:51 nvme.nvme_perf -- common/autotest_common.sh@10 -- # set +x
00:07:26.943 ************************************
00:07:26.943 END TEST nvme_perf
00:07:26.943 ************************************
00:07:26.943 02:02:51 nvme -- nvme/nvme.sh@87 -- # run_test nvme_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:26.943 ************************************
00:07:26.943 START TEST nvme_hello_world
00:07:26.943 ************************************
00:07:26.943 02:02:51 nvme.nvme_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_world -i 0
00:07:26.943 Initializing NVMe Controllers
00:07:26.943 Attached to 0000:00:10.0
00:07:26.943 Namespace ID: 1 size: 6GB
00:07:26.943 Attached to 0000:00:11.0
00:07:26.943 Namespace ID: 1 size: 5GB
00:07:26.943 Attached to 0000:00:13.0
00:07:26.943 Namespace ID: 1 size: 1GB
00:07:26.943 Attached to 0000:00:12.0
00:07:26.943 Namespace ID: 1 size: 4GB
00:07:26.943 Namespace ID: 2 size: 4GB
00:07:26.943 Namespace ID: 3 size: 4GB
00:07:26.943 Initialization complete.
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943 INFO: using host memory buffer for IO
00:07:26.943 Hello world!
00:07:26.943
00:07:26.943 real 0m0.223s
00:07:26.943 user 0m0.083s
00:07:26.943 sys 0m0.092s
00:07:26.943 02:02:51 nvme.nvme_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:26.943 02:02:51 nvme.nvme_hello_world -- common/autotest_common.sh@10 -- # set +x
00:07:26.943 ************************************
00:07:26.943 END TEST nvme_hello_world
00:07:26.943 ************************************
00:07:26.943 02:02:51 nvme -- nvme/nvme.sh@88 -- # run_test nvme_sgl /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:26.943 02:02:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:26.943 ************************************
00:07:26.943 START TEST nvme_sgl
00:07:26.943 ************************************
00:07:26.943 02:02:51 nvme.nvme_sgl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sgl/sgl
00:07:27.242 0000:00:10.0: build_io_request_0 Invalid IO length parameter
00:07:27.242 0000:00:10.0: build_io_request_1 Invalid IO length parameter
00:07:27.243 0000:00:10.0: build_io_request_3 Invalid IO length parameter
00:07:27.243 0000:00:10.0: build_io_request_8 Invalid IO length parameter
00:07:27.243 0000:00:10.0: build_io_request_9 Invalid IO length parameter
00:07:27.243 0000:00:10.0: build_io_request_11 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_0 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_1 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_3 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_8 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_9 Invalid IO length parameter
00:07:27.243 0000:00:11.0: build_io_request_11 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_0 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_1 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_2 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_3 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_4 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_5 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_6 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_7 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_8 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_9 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_10 Invalid IO length parameter
00:07:27.243 0000:00:13.0: build_io_request_11 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_0 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_1 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_2 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_3 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_4 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_5 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_6 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_7 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_8 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_9 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_10 Invalid IO length parameter
00:07:27.243 0000:00:12.0: build_io_request_11 Invalid IO length parameter
00:07:27.243 NVMe Readv/Writev Request test
00:07:27.243 Attached to 0000:00:10.0
00:07:27.243 Attached to 0000:00:11.0
00:07:27.243 Attached to 0000:00:13.0
00:07:27.243 Attached to 0000:00:12.0
00:07:27.243 0000:00:10.0: build_io_request_2 test passed
00:07:27.243 0000:00:10.0: build_io_request_4 test passed
00:07:27.243 0000:00:10.0: build_io_request_5 test passed
00:07:27.243 0000:00:10.0: build_io_request_6 test passed
00:07:27.243 0000:00:10.0: build_io_request_7 test passed
00:07:27.243 0000:00:10.0: build_io_request_10 test passed
00:07:27.243 0000:00:11.0: build_io_request_2 test passed
00:07:27.243 0000:00:11.0: build_io_request_4 test passed
00:07:27.243 0000:00:11.0: build_io_request_5 test passed
00:07:27.243 0000:00:11.0: build_io_request_6 test passed
00:07:27.243 0000:00:11.0: build_io_request_7 test passed
00:07:27.243 0000:00:11.0: build_io_request_10 test passed
00:07:27.243 Cleaning up...
00:07:27.243
00:07:27.243 real 0m0.278s
00:07:27.243 user 0m0.140s
00:07:27.243 sys 0m0.094s
00:07:27.243 02:02:51 nvme.nvme_sgl -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.243 02:02:51 nvme.nvme_sgl -- common/autotest_common.sh@10 -- # set +x
00:07:27.243 ************************************
00:07:27.243 END TEST nvme_sgl
00:07:27.243 ************************************
00:07:27.243 02:02:51 nvme -- nvme/nvme.sh@89 -- # run_test nvme_e2edp /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:27.243 02:02:51 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.243 02:02:51 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.243 02:02:51 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.530 ************************************
00:07:27.530 START TEST nvme_e2edp
00:07:27.530 ************************************
00:07:27.530 02:02:51 nvme.nvme_e2edp -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/e2edp/nvme_dp
00:07:27.530 NVMe Write/Read with End-to-End data protection test
00:07:27.530 Attached to 0000:00:10.0
00:07:27.530 Attached to 0000:00:11.0
00:07:27.530 Attached to 0000:00:13.0
00:07:27.530 Attached to 0000:00:12.0
00:07:27.530 Cleaning up...
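Every test in this stage is launched through the same run_test helper from common/autotest_common.sh (the @1105/@1111 trace frames above), which is what prints the START/END banners and the real/user/sys timings that bracket each tool's output; the e2edp timings follow just below. A stripped-down sketch of that pattern, for orientation only, since the real helper also manages the xtrace toggling and failure bookkeeping visible in the trace:

    # Simplified reconstruction of the run_test pattern seen in this log;
    # the actual implementation in common/autotest_common.sh does more.
    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"    # produces the "real/user/sys" lines after each test
        local rc=$?
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
        return $rc
    }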
00:07:27.530
00:07:27.530 real 0m0.206s
00:07:27.530 user 0m0.067s
00:07:27.530 sys 0m0.095s
00:07:27.530 02:02:52 nvme.nvme_e2edp -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.530 ************************************
00:07:27.530 END TEST nvme_e2edp
00:07:27.530 02:02:52 nvme.nvme_e2edp -- common/autotest_common.sh@10 -- # set +x
00:07:27.530 ************************************
00:07:27.530 02:02:52 nvme -- nvme/nvme.sh@90 -- # run_test nvme_reserve /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:27.530 02:02:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.530 02:02:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.530 02:02:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.530 ************************************
00:07:27.530 START TEST nvme_reserve
00:07:27.530 ************************************
00:07:27.530 02:02:52 nvme.nvme_reserve -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/reserve/reserve
00:07:27.863 =====================================================
00:07:27.863 NVMe Controller at PCI bus 0, device 16, function 0
00:07:27.863 =====================================================
00:07:27.863 Reservations: Not Supported
00:07:27.863 =====================================================
00:07:27.863 NVMe Controller at PCI bus 0, device 17, function 0
00:07:27.863 =====================================================
00:07:27.863 Reservations: Not Supported
00:07:27.863 =====================================================
00:07:27.863 NVMe Controller at PCI bus 0, device 19, function 0
00:07:27.863 =====================================================
00:07:27.863 Reservations: Not Supported
00:07:27.863 =====================================================
00:07:27.863 NVMe Controller at PCI bus 0, device 18, function 0
00:07:27.863 =====================================================
00:07:27.863 Reservations: Not Supported
00:07:27.863 Reservation test passed
00:07:27.863
00:07:27.863 real 0m0.209s
00:07:27.863 user 0m0.070s
00:07:27.863 sys 0m0.096s
00:07:27.863 02:02:52 nvme.nvme_reserve -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:27.863 ************************************
00:07:27.863 END TEST nvme_reserve
00:07:27.863 ************************************
00:07:27.863 02:02:52 nvme.nvme_reserve -- common/autotest_common.sh@10 -- # set +x
00:07:27.863 02:02:52 nvme -- nvme/nvme.sh@91 -- # run_test nvme_err_injection /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:27.863 02:02:52 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:07:27.863 02:02:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:27.863 02:02:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:27.863 ************************************
00:07:27.863 START TEST nvme_err_injection
00:07:27.863 ************************************
00:07:27.863 02:02:52 nvme.nvme_err_injection -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/err_injection/err_injection
00:07:28.121 NVMe Error Injection test
00:07:28.121 Attached to 0000:00:10.0
00:07:28.121 Attached to 0000:00:11.0
00:07:28.121 Attached to 0000:00:13.0
00:07:28.121 Attached to 0000:00:12.0
00:07:28.121 0000:00:10.0: get features failed as expected
00:07:28.121 0000:00:11.0: get features failed as expected
00:07:28.121 0000:00:13.0: get features failed as expected
00:07:28.121 0000:00:12.0: get features failed as expected
00:07:28.121 0000:00:10.0: get features successfully as expected
00:07:28.121 0000:00:11.0: get features successfully as expected
00:07:28.121 0000:00:13.0: get features successfully as expected
00:07:28.121 0000:00:12.0: get features successfully as expected
00:07:28.121 0000:00:10.0: read failed as expected
00:07:28.121 0000:00:11.0: read failed as expected
00:07:28.121 0000:00:13.0: read failed as expected
00:07:28.121 0000:00:12.0: read failed as expected
00:07:28.122 0000:00:10.0: read successfully as expected
00:07:28.122 0000:00:11.0: read successfully as expected
00:07:28.122 0000:00:13.0: read successfully as expected
00:07:28.122 0000:00:12.0: read successfully as expected
00:07:28.122 Cleaning up...
00:07:28.122
00:07:28.122 real 0m0.219s
00:07:28.122 user 0m0.079s
00:07:28.122 sys 0m0.096s
00:07:28.122 02:02:52 nvme.nvme_err_injection -- common/autotest_common.sh@1130 -- # xtrace_disable
00:07:28.122 02:02:52 nvme.nvme_err_injection -- common/autotest_common.sh@10 -- # set +x
00:07:28.122 ************************************
00:07:28.122 END TEST nvme_err_injection
00:07:28.122 ************************************
00:07:28.122 02:02:52 nvme -- nvme/nvme.sh@92 -- # run_test nvme_overhead /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:28.122 02:02:52 nvme -- common/autotest_common.sh@1105 -- # '[' 9 -le 1 ']'
00:07:28.122 02:02:52 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:07:28.122 02:02:52 nvme -- common/autotest_common.sh@10 -- # set +x
00:07:28.122 ************************************
00:07:28.122 START TEST nvme_overhead
00:07:28.122 ************************************
00:07:28.122 02:02:52 nvme.nvme_overhead -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/overhead/overhead -o 4096 -t 1 -H -i 0
00:07:29.497 Initializing NVMe Controllers
00:07:29.497 Attached to 0000:00:10.0
00:07:29.497 Attached to 0000:00:11.0
00:07:29.497 Attached to 0000:00:13.0
00:07:29.497 Attached to 0000:00:12.0
00:07:29.497 Initialization complete. Launching workers.
00:07:29.497 submit (in ns) avg, min, max = 11422.5, 10099.2, 60283.1
00:07:29.497 complete (in ns) avg, min, max = 7668.8, 7258.5, 131065.4
00:07:29.497
00:07:29.497 Submit histogram
00:07:29.497 ================
00:07:29.497 Range in us Cumulative Count
00:07:29.498 [bucket data elided: cumulative count rises from 0.0055% ( 1) at 10.092 - 10.142 us to 100.0000% ( 1) at 60.258 - 60.652 us]
00:07:29.498
00:07:29.498 Complete histogram
00:07:29.498 ==================
00:07:29.498 Range in us Cumulative Count
00:07:29.499 [bucket data elided: cumulative count rises from 0.0222% ( 4) at 7.237 - 7.286 us to 100.0000% ( 1) at 130.757 - 131.545 us]
00:07:29.499
00:07:29.499 real 0m1.213s
00:07:29.499 user 0m1.077s
00:07:29.499 sys 0m0.086s
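The submit and complete figures above report the per-IO software cost of the submission call and of completion processing, measured separately; summing the two averages gives a rough total driver-side overhead per 4096-byte IO. An illustrative one-liner, not part of the test itself; it only assumes the two "avg, min, max" summary lines are present in a saved copy of this log named overhead.log:

    awk -F'= ' '/submit \(in ns\)/   { split($2, s, ", ") }
                /complete \(in ns\)/ { split($2, c, ", ") }
                END { printf "avg submit+complete overhead: %.1f us\n",
                             (s[1] + c[1]) / 1000 }' overhead.log
    # with the values above: (11422.5 + 7668.8) / 1000 = 19.1 us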
00:07:29.499 02:02:53 nvme.nvme_overhead -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:29.499 02:02:53 nvme.nvme_overhead -- common/autotest_common.sh@10 -- # set +x 00:07:29.499 ************************************ 00:07:29.499 END TEST nvme_overhead 00:07:29.499 ************************************ 00:07:29.499 02:02:53 nvme -- nvme/nvme.sh@93 -- # run_test nvme_arbitration /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:29.499 02:02:53 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:07:29.499 02:02:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:29.499 02:02:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:29.499 ************************************ 00:07:29.499 START TEST nvme_arbitration 00:07:29.499 ************************************ 00:07:29.499 02:02:53 nvme.nvme_arbitration -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/arbitration -t 3 -i 0 00:07:32.789 Initializing NVMe Controllers 00:07:32.789 Attached to 0000:00:10.0 00:07:32.789 Attached to 0000:00:11.0 00:07:32.789 Attached to 0000:00:13.0 00:07:32.789 Attached to 0000:00:12.0 00:07:32.789 Associating QEMU NVMe Ctrl (12340 ) with lcore 0 00:07:32.789 Associating QEMU NVMe Ctrl (12341 ) with lcore 1 00:07:32.789 Associating QEMU NVMe Ctrl (12343 ) with lcore 2 00:07:32.789 Associating QEMU NVMe Ctrl (12342 ) with lcore 3 00:07:32.789 Associating QEMU NVMe Ctrl (12342 ) with lcore 0 00:07:32.789 Associating QEMU NVMe Ctrl (12342 ) with lcore 1 00:07:32.789 /home/vagrant/spdk_repo/spdk/build/examples/arbitration run with configuration: 00:07:32.789 /home/vagrant/spdk_repo/spdk/build/examples/arbitration -q 64 -s 131072 -w randrw -M 50 -l 0 -t 3 -c 0xf -m 0 -a 0 -b 0 -n 100000 -i 0 00:07:32.789 Initialization complete. Launching workers. 
00:07:32.789 Starting thread on core 1 with urgent priority queue 00:07:32.789 Starting thread on core 2 with urgent priority queue 00:07:32.789 Starting thread on core 3 with urgent priority queue 00:07:32.789 Starting thread on core 0 with urgent priority queue 00:07:32.789 QEMU NVMe Ctrl (12340 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:32.789 QEMU NVMe Ctrl (12342 ) core 0: 960.00 IO/s 104.17 secs/100000 ios 00:07:32.789 QEMU NVMe Ctrl (12341 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:32.789 QEMU NVMe Ctrl (12342 ) core 1: 960.00 IO/s 104.17 secs/100000 ios 00:07:32.789 QEMU NVMe Ctrl (12343 ) core 2: 938.67 IO/s 106.53 secs/100000 ios 00:07:32.789 QEMU NVMe Ctrl (12342 ) core 3: 960.00 IO/s 104.17 secs/100000 ios 00:07:32.789 ======================================================== 00:07:32.789 00:07:32.789 00:07:32.789 real 0m3.301s 00:07:32.789 user 0m9.296s 00:07:32.789 sys 0m0.099s 00:07:32.789 02:02:57 nvme.nvme_arbitration -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.789 02:02:57 nvme.nvme_arbitration -- common/autotest_common.sh@10 -- # set +x 00:07:32.789 ************************************ 00:07:32.789 END TEST nvme_arbitration 00:07:32.789 ************************************ 00:07:32.789 02:02:57 nvme -- nvme/nvme.sh@94 -- # run_test nvme_single_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:32.789 02:02:57 nvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:07:32.789 02:02:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:32.789 02:02:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:32.789 ************************************ 00:07:32.789 START TEST nvme_single_aen 00:07:32.789 ************************************ 00:07:32.789 02:02:57 nvme.nvme_single_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -T -i 0 00:07:32.789 Asynchronous Event Request test 00:07:32.789 Attached to 0000:00:10.0 00:07:32.789 Attached to 0000:00:11.0 00:07:32.789 Attached to 0000:00:13.0 00:07:32.789 Attached to 0000:00:12.0 00:07:32.789 Reset controller to setup AER completions for this process 00:07:32.789 Registering asynchronous event callbacks... 
00:07:32.789 Getting orig temperature thresholds of all controllers 00:07:32.789 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:32.789 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:32.789 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:32.789 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:07:32.789 Setting all controllers temperature threshold low to trigger AER 00:07:32.789 Waiting for all controllers temperature threshold to be set lower 00:07:32.789 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:32.789 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:07:32.789 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:32.789 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:07:32.789 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:32.789 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:07:32.789 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:07:32.789 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:07:32.789 Waiting for all controllers to trigger AER and reset threshold 00:07:32.789 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.789 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.789 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.789 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:07:32.789 Cleaning up... 00:07:32.790 00:07:32.790 real 0m0.217s 00:07:32.790 user 0m0.072s 00:07:32.790 sys 0m0.097s 00:07:32.790 02:02:57 nvme.nvme_single_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:07:32.790 02:02:57 nvme.nvme_single_aen -- common/autotest_common.sh@10 -- # set +x 00:07:32.790 ************************************ 00:07:32.790 END TEST nvme_single_aen 00:07:32.790 ************************************ 00:07:33.049 02:02:57 nvme -- nvme/nvme.sh@95 -- # run_test nvme_doorbell_aers nvme_doorbell_aers 00:07:33.049 02:02:57 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:07:33.049 02:02:57 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:07:33.049 02:02:57 nvme -- common/autotest_common.sh@10 -- # set +x 00:07:33.049 ************************************ 00:07:33.049 START TEST nvme_doorbell_aers 00:07:33.049 ************************************ 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1129 -- # nvme_doorbell_aers 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # bdfs=() 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@70 -- # local bdfs bdf 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # bdfs=($(get_nvme_bdfs)) 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@71 -- # get_nvme_bdfs 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # bdfs=() 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1498 -- # local bdfs 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 
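The nvme_doorbell_aers setup traced above enumerates NVMe PCI addresses with gen_nvme.sh and jq, then runs the doorbell test once per device with a 10-second cap; the per-device runs follow below. A condensed sketch of that loop, using the same helpers the harness uses:

    # Enumerate attached NVMe BDFs the way autotest_common.sh@1499 does,
    # then exercise the doorbell AER tests against each device.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    bdfs=($("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    for bdf in "${bdfs[@]}"; do
        timeout --preserve-status 10 \
            "$SPDK_DIR/test/nvme/doorbell_aers/doorbell_aers" \
            -r "trtype:PCIe traddr:$bdf"
    done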
00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:33.049 02:02:57 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:10.0' 00:07:33.307 [2024-12-15 02:02:57.818204] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:07:43.276 Executing: test_write_invalid_db 00:07:43.276 Waiting for AER completion... 00:07:43.276 Failure: test_write_invalid_db 00:07:43.276 00:07:43.276 Executing: test_invalid_db_write_overflow_sq 00:07:43.276 Waiting for AER completion... 00:07:43.276 Failure: test_invalid_db_write_overflow_sq 00:07:43.276 00:07:43.276 Executing: test_invalid_db_write_overflow_cq 00:07:43.276 Waiting for AER completion... 00:07:43.276 Failure: test_invalid_db_write_overflow_cq 00:07:43.276 00:07:43.277 02:03:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:43.277 02:03:07 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:11.0' 00:07:43.277 [2024-12-15 02:03:07.863055] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:07:53.244 Executing: test_write_invalid_db 00:07:53.244 Waiting for AER completion... 00:07:53.245 Failure: test_write_invalid_db 00:07:53.245 00:07:53.245 Executing: test_invalid_db_write_overflow_sq 00:07:53.245 Waiting for AER completion... 00:07:53.245 Failure: test_invalid_db_write_overflow_sq 00:07:53.245 00:07:53.245 Executing: test_invalid_db_write_overflow_cq 00:07:53.245 Waiting for AER completion... 00:07:53.245 Failure: test_invalid_db_write_overflow_cq 00:07:53.245 00:07:53.245 02:03:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:07:53.245 02:03:17 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:12.0' 00:07:53.245 [2024-12-15 02:03:17.899435] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:03.214 Executing: test_write_invalid_db 00:08:03.214 Waiting for AER completion... 00:08:03.214 Failure: test_write_invalid_db 00:08:03.214 00:08:03.214 Executing: test_invalid_db_write_overflow_sq 00:08:03.214 Waiting for AER completion... 00:08:03.214 Failure: test_invalid_db_write_overflow_sq 00:08:03.214 00:08:03.214 Executing: test_invalid_db_write_overflow_cq 00:08:03.214 Waiting for AER completion... 
00:08:03.214 Failure: test_invalid_db_write_overflow_cq 00:08:03.214 00:08:03.214 02:03:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@72 -- # for bdf in "${bdfs[@]}" 00:08:03.214 02:03:27 nvme.nvme_doorbell_aers -- nvme/nvme.sh@73 -- # timeout --preserve-status 10 /home/vagrant/spdk_repo/spdk/test/nvme/doorbell_aers/doorbell_aers -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:03.214 [2024-12-15 02:03:27.944443] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.182 Executing: test_write_invalid_db 00:08:13.182 Waiting for AER completion... 00:08:13.182 Failure: test_write_invalid_db 00:08:13.182 00:08:13.182 Executing: test_invalid_db_write_overflow_sq 00:08:13.182 Waiting for AER completion... 00:08:13.182 Failure: test_invalid_db_write_overflow_sq 00:08:13.182 00:08:13.182 Executing: test_invalid_db_write_overflow_cq 00:08:13.182 Waiting for AER completion... 00:08:13.182 Failure: test_invalid_db_write_overflow_cq 00:08:13.182 00:08:13.182 00:08:13.182 real 0m40.184s 00:08:13.182 user 0m34.177s 00:08:13.182 sys 0m5.611s 00:08:13.182 02:03:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.182 02:03:37 nvme.nvme_doorbell_aers -- common/autotest_common.sh@10 -- # set +x 00:08:13.182 ************************************ 00:08:13.182 END TEST nvme_doorbell_aers 00:08:13.182 ************************************ 00:08:13.182 02:03:37 nvme -- nvme/nvme.sh@97 -- # uname 00:08:13.182 02:03:37 nvme -- nvme/nvme.sh@97 -- # '[' Linux '!=' FreeBSD ']' 00:08:13.182 02:03:37 nvme -- nvme/nvme.sh@98 -- # run_test nvme_multi_aen /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:13.182 02:03:37 nvme -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:08:13.182 02:03:37 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.182 02:03:37 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.182 ************************************ 00:08:13.182 START TEST nvme_multi_aen 00:08:13.182 ************************************ 00:08:13.182 02:03:37 nvme.nvme_multi_aen -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/aer/aer -m -T -i 0 00:08:13.439 [2024-12-15 02:03:37.963315] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.963374] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.963388] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.964991] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.965036] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.965049] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.966221] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. 
Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.966253] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.966262] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.967337] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.967366] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 [2024-12-15 02:03:37.967375] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 64973) is not found. Dropping the request. 00:08:13.439 Child process pid: 65499 00:08:13.439 [Child] Asynchronous Event Request test 00:08:13.439 [Child] Attached to 0000:00:10.0 00:08:13.439 [Child] Attached to 0000:00:11.0 00:08:13.439 [Child] Attached to 0000:00:13.0 00:08:13.439 [Child] Attached to 0000:00:12.0 00:08:13.439 [Child] Registering asynchronous event callbacks... 00:08:13.439 [Child] Getting orig temperature thresholds of all controllers 00:08:13.440 [Child] 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.440 [Child] 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.440 [Child] 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.440 [Child] 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.440 [Child] Waiting for all controllers to trigger AER and reset threshold 00:08:13.440 [Child] 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.440 [Child] 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.440 [Child] 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.440 [Child] 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.440 [Child] 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.440 [Child] 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.440 [Child] 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.440 [Child] 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.440 [Child] Cleaning up... 00:08:13.698 Asynchronous Event Request test 00:08:13.698 Attached to 0000:00:10.0 00:08:13.698 Attached to 0000:00:11.0 00:08:13.698 Attached to 0000:00:13.0 00:08:13.698 Attached to 0000:00:12.0 00:08:13.698 Reset controller to setup AER completions for this process 00:08:13.698 Registering asynchronous event callbacks... 
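nvme_multi_aen runs the same aer binary with -m added. Consistent with the [Child] markers above, the flag appears to fork a child process that attaches to the same controllers and registers its own callbacks before the parent repeats the threshold test, whose output continues below. The invocation, verbatim from the trace:

    # Multi-process AER variant: a child attaches first, then the parent
    # runs the same temperature-threshold pass.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/test/nvme/aer/aer" -m -T -i 0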
00:08:13.698 Getting orig temperature thresholds of all controllers 00:08:13.698 0000:00:10.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.698 0000:00:11.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.698 0000:00:13.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.698 0000:00:12.0: original temperature threshold: 343 Kelvin (70 Celsius) 00:08:13.698 Setting all controllers temperature threshold low to trigger AER 00:08:13.698 Waiting for all controllers temperature threshold to be set lower 00:08:13.698 0000:00:10.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.698 aer_cb - Resetting Temp Threshold for device: 0000:00:10.0 00:08:13.698 0000:00:11.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.698 aer_cb - Resetting Temp Threshold for device: 0000:00:11.0 00:08:13.698 0000:00:13.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.698 aer_cb - Resetting Temp Threshold for device: 0000:00:13.0 00:08:13.698 0000:00:12.0: aer_cb for log page 2, aen_event_type: 0x01, aen_event_info: 0x01 00:08:13.698 aer_cb - Resetting Temp Threshold for device: 0000:00:12.0 00:08:13.698 Waiting for all controllers to trigger AER and reset threshold 00:08:13.698 0000:00:10.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.698 0000:00:11.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.698 0000:00:13.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.698 0000:00:12.0: Current Temperature: 323 Kelvin (50 Celsius) 00:08:13.698 Cleaning up... 00:08:13.698 00:08:13.698 real 0m0.425s 00:08:13.698 user 0m0.143s 00:08:13.698 sys 0m0.180s 00:08:13.698 02:03:38 nvme.nvme_multi_aen -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.698 02:03:38 nvme.nvme_multi_aen -- common/autotest_common.sh@10 -- # set +x 00:08:13.698 ************************************ 00:08:13.698 END TEST nvme_multi_aen 00:08:13.698 ************************************ 00:08:13.698 02:03:38 nvme -- nvme/nvme.sh@99 -- # run_test nvme_startup /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:13.698 02:03:38 nvme -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:08:13.698 02:03:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.698 02:03:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.698 ************************************ 00:08:13.698 START TEST nvme_startup 00:08:13.698 ************************************ 00:08:13.698 02:03:38 nvme.nvme_startup -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/startup/startup -t 1000000 00:08:13.698 Initializing NVMe Controllers 00:08:13.698 Attached to 0000:00:10.0 00:08:13.698 Attached to 0000:00:11.0 00:08:13.698 Attached to 0000:00:13.0 00:08:13.698 Attached to 0000:00:12.0 00:08:13.698 Initialization complete. 00:08:13.698 Time used:138199.062 (us). 
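nvme_startup, whose "Time used" line appears above and whose shell timing summary follows below, simply measures controller bring-up: attach all devices, report the elapsed initialization time, and exit. The invocation, with -t taken verbatim from the trace:

    # Measure NVMe controller bring-up time.
    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    sudo "$SPDK_DIR/test/nvme/startup/startup" -t 1000000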
00:08:13.698 00:08:13.698 real 0m0.199s 00:08:13.698 user 0m0.072s 00:08:13.698 sys 0m0.086s 00:08:13.698 02:03:38 nvme.nvme_startup -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:13.698 ************************************ 00:08:13.698 END TEST nvme_startup 00:08:13.698 02:03:38 nvme.nvme_startup -- common/autotest_common.sh@10 -- # set +x 00:08:13.698 ************************************ 00:08:13.957 02:03:38 nvme -- nvme/nvme.sh@100 -- # run_test nvme_multi_secondary nvme_multi_secondary 00:08:13.957 02:03:38 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:13.957 02:03:38 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:13.957 02:03:38 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:13.957 ************************************ 00:08:13.957 START TEST nvme_multi_secondary 00:08:13.957 ************************************ 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- common/autotest_common.sh@1129 -- # nvme_multi_secondary 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@52 -- # pid0=65549 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@51 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@54 -- # pid1=65550 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@55 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 00:08:13.957 02:03:38 nvme.nvme_multi_secondary -- nvme/nvme.sh@53 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:17.235 Initializing NVMe Controllers 00:08:17.235 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.235 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.235 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.235 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.235 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:17.235 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:17.235 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:17.235 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:17.235 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:17.235 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:17.235 Initialization complete. Launching workers. 
00:08:17.235 ======================================================== 00:08:17.235 Latency(us) 00:08:17.235 Device Information : IOPS MiB/s Average min max 00:08:17.235 PCIE (0000:00:10.0) NSID 1 from core 1: 7941.20 31.02 2013.47 711.58 6584.91 00:08:17.235 PCIE (0000:00:11.0) NSID 1 from core 1: 7941.20 31.02 2014.44 732.41 6823.95 00:08:17.236 PCIE (0000:00:13.0) NSID 1 from core 1: 7941.20 31.02 2014.43 738.33 6964.74 00:08:17.236 PCIE (0000:00:12.0) NSID 1 from core 1: 7941.20 31.02 2014.50 723.55 6570.00 00:08:17.236 PCIE (0000:00:12.0) NSID 2 from core 1: 7941.20 31.02 2014.50 727.22 5895.30 00:08:17.236 PCIE (0000:00:12.0) NSID 3 from core 1: 7941.20 31.02 2014.52 734.11 6009.52 00:08:17.236 ======================================================== 00:08:17.236 Total : 47647.22 186.12 2014.31 711.58 6964.74 00:08:17.236 00:08:17.236 Initializing NVMe Controllers 00:08:17.236 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:17.236 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:17.236 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:17.236 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:17.236 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:17.236 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:17.236 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:17.236 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:17.236 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:17.236 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:17.236 Initialization complete. Launching workers. 00:08:17.236 ======================================================== 00:08:17.236 Latency(us) 00:08:17.236 Device Information : IOPS MiB/s Average min max 00:08:17.236 PCIE (0000:00:10.0) NSID 1 from core 2: 3333.13 13.02 4798.63 732.68 13636.16 00:08:17.236 PCIE (0000:00:11.0) NSID 1 from core 2: 3333.13 13.02 4799.99 751.65 13657.72 00:08:17.236 PCIE (0000:00:13.0) NSID 1 from core 2: 3333.13 13.02 4800.06 740.66 13541.91 00:08:17.236 PCIE (0000:00:12.0) NSID 1 from core 2: 3333.13 13.02 4800.02 744.96 14380.93 00:08:17.236 PCIE (0000:00:12.0) NSID 2 from core 2: 3333.13 13.02 4799.96 744.37 13973.11 00:08:17.236 PCIE (0000:00:12.0) NSID 3 from core 2: 3333.13 13.02 4799.91 738.03 13553.15 00:08:17.236 ======================================================== 00:08:17.236 Total : 19998.76 78.12 4799.76 732.68 14380.93 00:08:17.236 00:08:17.236 02:03:41 nvme.nvme_multi_secondary -- nvme/nvme.sh@56 -- # wait 65549 00:08:19.135 Initializing NVMe Controllers 00:08:19.135 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:19.135 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:19.135 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:19.135 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:19.135 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:19.135 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:19.135 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:19.135 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:19.135 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:19.135 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:19.135 Initialization complete. Launching workers. 
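As a sanity check on the two tables above, each Total row is its six per-namespace rows summed: 6 × 7941.20 ≈ 47647.22 IOPS for core 1 and 6 × 3333.13 ≈ 19998.76 IOPS for core 2. The nvme_multi_secondary pattern itself is one primary spdk_nvme_perf plus two secondaries that join the same DPDK shared-memory instance via -i 0, pinned to distinct cores by their -c masks. A condensed sketch, with the ordering between primary start-up and secondary launch reduced to a sleep (an assumption; the harness serializes this through its own pid bookkeeping):

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    PERF="$SPDK_DIR/build/bin/spdk_nvme_perf"
    # Primary: core 0 (-c 0x1), 5 s read workload, shared-memory id 0.
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 5 -c 0x1 &
    sleep 1    # let the primary initialize first (assumption)
    # Secondaries: cores 1 and 2, 3 s each, joining the same instance.
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 &
    sudo "$PERF" -i 0 -q 16 -w read -o 4096 -t 3 -c 0x4 &
    wait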
00:08:19.135 ======================================================== 00:08:19.135 Latency(us) 00:08:19.135 Device Information : IOPS MiB/s Average min max 00:08:19.135 PCIE (0000:00:10.0) NSID 1 from core 0: 10661.13 41.65 1499.49 695.66 6387.66 00:08:19.135 PCIE (0000:00:11.0) NSID 1 from core 0: 10661.13 41.65 1500.36 684.41 6331.76 00:08:19.135 PCIE (0000:00:13.0) NSID 1 from core 0: 10661.13 41.65 1500.33 636.98 6343.15 00:08:19.135 PCIE (0000:00:12.0) NSID 1 from core 0: 10661.13 41.65 1500.30 627.03 6886.06 00:08:19.135 PCIE (0000:00:12.0) NSID 2 from core 0: 10661.13 41.65 1500.28 600.79 6691.89 00:08:19.135 PCIE (0000:00:12.0) NSID 3 from core 0: 10661.13 41.65 1500.25 565.71 6534.75 00:08:19.135 ======================================================== 00:08:19.135 Total : 63966.80 249.87 1500.17 565.71 6886.06 00:08:19.135 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@57 -- # wait 65550 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@61 -- # pid0=65625 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@63 -- # pid1=65626 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@64 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 5 -c 0x4 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@62 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x2 00:08:19.135 02:03:43 nvme.nvme_multi_secondary -- nvme/nvme.sh@60 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_perf -i 0 -q 16 -w read -o 4096 -t 3 -c 0x1 00:08:22.417 Initializing NVMe Controllers 00:08:22.417 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.417 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.417 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.417 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.417 Associating PCIE (0000:00:10.0) NSID 1 with lcore 1 00:08:22.417 Associating PCIE (0000:00:11.0) NSID 1 with lcore 1 00:08:22.417 Associating PCIE (0000:00:13.0) NSID 1 with lcore 1 00:08:22.417 Associating PCIE (0000:00:12.0) NSID 1 with lcore 1 00:08:22.417 Associating PCIE (0000:00:12.0) NSID 2 with lcore 1 00:08:22.417 Associating PCIE (0000:00:12.0) NSID 3 with lcore 1 00:08:22.417 Initialization complete. Launching workers. 
00:08:22.417 ======================================================== 00:08:22.417 Latency(us) 00:08:22.417 Device Information : IOPS MiB/s Average min max 00:08:22.417 PCIE (0000:00:10.0) NSID 1 from core 1: 7969.71 31.13 2006.30 713.34 5510.21 00:08:22.417 PCIE (0000:00:11.0) NSID 1 from core 1: 7969.71 31.13 2007.30 732.49 5322.82 00:08:22.417 PCIE (0000:00:13.0) NSID 1 from core 1: 7969.71 31.13 2007.35 741.17 5471.63 00:08:22.417 PCIE (0000:00:12.0) NSID 1 from core 1: 7969.71 31.13 2007.30 743.74 5430.29 00:08:22.417 PCIE (0000:00:12.0) NSID 2 from core 1: 7969.71 31.13 2007.35 746.25 5038.95 00:08:22.417 PCIE (0000:00:12.0) NSID 3 from core 1: 7969.71 31.13 2007.30 728.30 5009.67 00:08:22.417 ======================================================== 00:08:22.417 Total : 47818.24 186.79 2007.15 713.34 5510.21 00:08:22.417 00:08:22.417 Initializing NVMe Controllers 00:08:22.417 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:22.417 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:22.418 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:22.418 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:22.418 Associating PCIE (0000:00:10.0) NSID 1 with lcore 0 00:08:22.418 Associating PCIE (0000:00:11.0) NSID 1 with lcore 0 00:08:22.418 Associating PCIE (0000:00:13.0) NSID 1 with lcore 0 00:08:22.418 Associating PCIE (0000:00:12.0) NSID 1 with lcore 0 00:08:22.418 Associating PCIE (0000:00:12.0) NSID 2 with lcore 0 00:08:22.418 Associating PCIE (0000:00:12.0) NSID 3 with lcore 0 00:08:22.418 Initialization complete. Launching workers. 00:08:22.418 ======================================================== 00:08:22.418 Latency(us) 00:08:22.418 Device Information : IOPS MiB/s Average min max 00:08:22.418 PCIE (0000:00:10.0) NSID 1 from core 0: 7674.59 29.98 2083.36 705.92 6127.04 00:08:22.418 PCIE (0000:00:11.0) NSID 1 from core 0: 7674.59 29.98 2084.16 713.66 6136.88 00:08:22.418 PCIE (0000:00:13.0) NSID 1 from core 0: 7674.59 29.98 2084.00 671.04 6157.70 00:08:22.418 PCIE (0000:00:12.0) NSID 1 from core 0: 7674.59 29.98 2083.84 636.41 5801.08 00:08:22.418 PCIE (0000:00:12.0) NSID 2 from core 0: 7674.59 29.98 2083.69 599.21 6115.95 00:08:22.418 PCIE (0000:00:12.0) NSID 3 from core 0: 7674.59 29.98 2083.54 572.31 6158.12 00:08:22.418 ======================================================== 00:08:22.418 Total : 46047.55 179.87 2083.76 572.31 6158.12 00:08:22.418 00:08:24.317 Initializing NVMe Controllers 00:08:24.317 Attached to NVMe Controller at 0000:00:10.0 [1b36:0010] 00:08:24.317 Attached to NVMe Controller at 0000:00:11.0 [1b36:0010] 00:08:24.317 Attached to NVMe Controller at 0000:00:13.0 [1b36:0010] 00:08:24.317 Attached to NVMe Controller at 0000:00:12.0 [1b36:0010] 00:08:24.317 Associating PCIE (0000:00:10.0) NSID 1 with lcore 2 00:08:24.317 Associating PCIE (0000:00:11.0) NSID 1 with lcore 2 00:08:24.317 Associating PCIE (0000:00:13.0) NSID 1 with lcore 2 00:08:24.317 Associating PCIE (0000:00:12.0) NSID 1 with lcore 2 00:08:24.317 Associating PCIE (0000:00:12.0) NSID 2 with lcore 2 00:08:24.317 Associating PCIE (0000:00:12.0) NSID 3 with lcore 2 00:08:24.317 Initialization complete. Launching workers. 
00:08:24.317 ======================================================== 00:08:24.317 Latency(us) 00:08:24.317 Device Information : IOPS MiB/s Average min max 00:08:24.317 PCIE (0000:00:10.0) NSID 1 from core 2: 4625.90 18.07 3456.88 724.26 12888.30 00:08:24.317 PCIE (0000:00:11.0) NSID 1 from core 2: 4625.90 18.07 3458.47 697.00 12971.84 00:08:24.317 PCIE (0000:00:13.0) NSID 1 from core 2: 4625.90 18.07 3458.07 732.42 13309.19 00:08:24.317 PCIE (0000:00:12.0) NSID 1 from core 2: 4625.90 18.07 3455.59 727.40 12019.46 00:08:24.317 PCIE (0000:00:12.0) NSID 2 from core 2: 4625.90 18.07 3455.37 729.60 12675.93 00:08:24.317 PCIE (0000:00:12.0) NSID 3 from core 2: 4625.90 18.07 3455.15 738.91 12919.80 00:08:24.317 ======================================================== 00:08:24.317 Total : 27755.38 108.42 3456.59 697.00 13309.19 00:08:24.317 00:08:24.317 02:03:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@65 -- # wait 65625 00:08:24.317 02:03:49 nvme.nvme_multi_secondary -- nvme/nvme.sh@66 -- # wait 65626 00:08:24.317 00:08:24.317 real 0m10.579s 00:08:24.317 user 0m18.410s 00:08:24.317 sys 0m0.568s 00:08:24.317 02:03:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:24.317 ************************************ 00:08:24.317 END TEST nvme_multi_secondary 00:08:24.317 02:03:49 nvme.nvme_multi_secondary -- common/autotest_common.sh@10 -- # set +x 00:08:24.317 ************************************ 00:08:24.577 02:03:49 nvme -- nvme/nvme.sh@101 -- # trap - SIGINT SIGTERM EXIT 00:08:24.577 02:03:49 nvme -- nvme/nvme.sh@102 -- # kill_stub 00:08:24.577 02:03:49 nvme -- common/autotest_common.sh@1093 -- # [[ -e /proc/64582 ]] 00:08:24.577 02:03:49 nvme -- common/autotest_common.sh@1094 -- # kill 64582 00:08:24.577 02:03:49 nvme -- common/autotest_common.sh@1095 -- # wait 64582 00:08:24.577 [2024-12-15 02:03:49.107842] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.577 [2024-12-15 02:03:49.108340] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.108378] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.108392] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.109927] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.109966] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.109978] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.109989] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.111532] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 
00:08:24.578 [2024-12-15 02:03:49.111567] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.111578] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.111589] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.113137] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.113175] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.113186] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 [2024-12-15 02:03:49.113209] nvme_pcie_common.c: 321:nvme_pcie_qpair_insert_pending_admin_request: *ERROR*: The owning process (pid 65498) is not found. Dropping the request. 00:08:24.578 02:03:49 nvme -- common/autotest_common.sh@1097 -- # rm -f /var/run/spdk_stub0 00:08:24.578 02:03:49 nvme -- common/autotest_common.sh@1101 -- # echo 2 00:08:24.578 02:03:49 nvme -- nvme/nvme.sh@105 -- # run_test bdev_nvme_reset_stuck_adm_cmd /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:24.578 02:03:49 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:24.578 02:03:49 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:24.578 02:03:49 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:24.578 ************************************ 00:08:24.578 START TEST bdev_nvme_reset_stuck_adm_cmd 00:08:24.578 ************************************ 00:08:24.578 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_reset_stuck_adm_cmd.sh 00:08:24.578 * Looking for test storage... 
00:08:24.578 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:24.578 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:24.578 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lcov --version 00:08:24.578 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # IFS=.-: 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@336 -- # read -ra ver1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # IFS=.-: 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@337 -- # read -ra ver2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@338 -- # local 'op=<' 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@340 -- # ver1_l=2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@341 -- # ver2_l=1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@344 -- # case "$op" in 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@345 -- # : 1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # decimal 1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@365 -- # ver1[v]=1 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # decimal 2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@353 -- # local d=2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@355 -- # echo 2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@366 -- # ver2[v]=2 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- scripts/common.sh@368 -- # return 0 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:24.839 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.840 --rc genhtml_branch_coverage=1 00:08:24.840 --rc genhtml_function_coverage=1 00:08:24.840 --rc genhtml_legend=1 00:08:24.840 --rc geninfo_all_blocks=1 00:08:24.840 --rc geninfo_unexecuted_blocks=1 00:08:24.840 00:08:24.840 ' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.840 --rc genhtml_branch_coverage=1 00:08:24.840 --rc genhtml_function_coverage=1 00:08:24.840 --rc genhtml_legend=1 00:08:24.840 --rc geninfo_all_blocks=1 00:08:24.840 --rc geninfo_unexecuted_blocks=1 00:08:24.840 00:08:24.840 ' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:08:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.840 --rc genhtml_branch_coverage=1 00:08:24.840 --rc genhtml_function_coverage=1 00:08:24.840 --rc genhtml_legend=1 00:08:24.840 --rc geninfo_all_blocks=1 00:08:24.840 --rc geninfo_unexecuted_blocks=1 00:08:24.840 00:08:24.840 ' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:24.840 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:24.840 --rc genhtml_branch_coverage=1 00:08:24.840 --rc genhtml_function_coverage=1 00:08:24.840 --rc genhtml_legend=1 00:08:24.840 --rc geninfo_all_blocks=1 00:08:24.840 --rc geninfo_unexecuted_blocks=1 00:08:24.840 00:08:24.840 ' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@18 -- # ctrlr_name=nvme0 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@20 -- # err_injection_timeout=15000000 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@22 -- # test_timeout=5 00:08:24.840 
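The cmp_versions trace above is scripts/common.sh deciding whether the installed lcov predates 2.x ("lt 1.15 2"): split both version strings into fields and compare them numerically until one side wins. A self-contained sketch of that comparison, simplified to dot-separated numeric fields (the harness also splits on "-" and ":"):

    # Field-wise version compare: returns 0 (true) when $1 < $2.
    version_lt() {
        local IFS=.
        local -a a=($1) b=($2)
        local i
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal versions are not "less than"
    }
    version_lt 1.15 2 && echo "lcov is older than 2.x"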
02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@25 -- # err_injection_sct=0 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@27 -- # err_injection_sc=1 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # get_first_nvme_bdf 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # bdfs=() 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1509 -- # local bdfs 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1498 -- # local bdfs 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:08:24.840 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@29 -- # bdf=0000:00:10.0 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@30 -- # '[' -z 0000:00:10.0 ']' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@36 -- # spdk_target_pid=65782 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0xF 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@37 -- # trap 'killprocess "$spdk_target_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@38 -- # waitforlisten 65782 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@835 -- # '[' -z 65782 ']' 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@840 -- # local max_retries=100 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
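What follows in the trace is the heart of bdev_nvme_reset_stuck_adm_cmd: start spdk_tgt, attach the first NVMe device as nvme0, arm an error injection that holds the next admin Get Features command (opcode 10) for up to 15 s instead of submitting it, issue exactly such a command asynchronously, reset the controller while the command is stuck, and check that the command completed with the injected status within the 5 s budget. A condensed sketch of that sequence, using the RPC names that appear verbatim below; GET_FEATURES_B64 stands in for the base64-encoded admin command the test builds, and the output file name is illustrative:

    SPDK_DIR=/home/vagrant/spdk_repo/spdk
    RPC="$SPDK_DIR/scripts/rpc.py"
    first_bdf=$("$SPDK_DIR/scripts/gen_nvme.sh" | jq -r '.config[0].params.traddr')

    sudo "$SPDK_DIR/build/bin/spdk_tgt" -m 0xF &              # target on 4 cores
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done     # crude waitforlisten

    $RPC bdev_nvme_attach_controller -b nvme0 -t PCIe -a "$first_bdf"
    # Hold the next admin Get Features (opc 10) for up to 15 s and finish
    # it manually with sct=0/sc=1 instead of ever submitting it.
    $RPC bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 \
        --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit
    $RPC bdev_nvme_send_cmd -n nvme0 -t admin -r c2h \
        -c "$GET_FEATURES_B64" > /tmp/err_inj_out.json &      # gets stuck by design
    cmd_pid=$!
    $RPC bdev_nvme_reset_controller nvme0                     # reset with cmd pending
    wait "$cmd_pid"                                           # stuck cmd completes now

    # Decode the completion: .cpl is the base64 of the 16-byte CQE; the
    # status word is bytes 14-15 (bit 0 = phase, bits 1-8 = SC, 9-11 = SCT).
    cpl=$(jq -r .cpl /tmp/err_inj_out.json)
    bytes=($(base64 -d <(printf '%s' "$cpl") | hexdump -ve '/1 "0x%02x\n"'))
    status=$(( bytes[14] | bytes[15] << 8 ))
    printf 'sc=0x%x sct=0x%x\n' $(( (status >> 1) & 0xff )) $(( (status >> 9) & 0x7 ))
    # (the harness then detaches nvme0 and kills the target)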
00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@844 -- # xtrace_disable 00:08:24.840 02:03:49 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:24.840 [2024-12-15 02:03:49.533751] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:08:24.840 [2024-12-15 02:03:49.534025] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0xF --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid65782 ] 00:08:25.101 [2024-12-15 02:03:49.704455] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 4 00:08:25.101 [2024-12-15 02:03:49.805496] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:08:25.101 [2024-12-15 02:03:49.805804] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:08:25.101 [2024-12-15 02:03:49.806081] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 3 00:08:25.101 [2024-12-15 02:03:49.806175] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:08:25.672 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:08:25.672 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@868 -- # return 0 00:08:25.672 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@40 -- # rpc_cmd bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:10.0 00:08:25.672 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.672 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:25.935 nvme0n1 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # mktemp /tmp/err_inj_XXXXX.txt 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@41 -- # tmp_file=/tmp/err_inj_WHAew.txt 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@44 -- # rpc_cmd bdev_nvme_add_error_injection -n nvme0 --cmd-type admin --opc 10 --timeout-in-us 15000000 --err-count 1 --sct 0 --sc 1 --do_not_submit 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:25.935 true 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # date +%s 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@45 -- # start_time=1734228230 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@51 -- # get_feat_pid=65805 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@52 -- # trap 'killprocess "$get_feat_pid"; exit 1' SIGINT SIGTERM EXIT 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@55 -- # sleep 2 00:08:25.935 02:03:50 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_send_cmd -n nvme0 -t admin -r c2h -c 
CgAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAcAAAAAAAAAAAAAAAAAAAAAAAAAAAAAAA== 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@57 -- # rpc_cmd bdev_nvme_reset_controller nvme0 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.844 [2024-12-15 02:03:52.505867] nvme_ctrlr.c:1728:nvme_ctrlr_disconnect: *NOTICE*: [0000:00:10.0, 0] resetting controller 00:08:27.844 [2024-12-15 02:03:52.506273] nvme_qpair.c: 558:nvme_qpair_manual_complete_request: *NOTICE*: Command completed manually: 00:08:27.844 [2024-12-15 02:03:52.506309] nvme_qpair.c: 213:nvme_admin_qpair_print_command: *NOTICE*: GET FEATURES NUMBER OF QUEUES cid:0 cdw10:00000007 PRP1 0x0 PRP2 0x0 00:08:27.844 [2024-12-15 02:03:52.506336] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: INVALID OPCODE (00/01) qid:0 cid:0 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:08:27.844 [2024-12-15 02:03:52.507869] bdev_nvme.c:2287:bdev_nvme_reset_ctrlr_complete: *NOTICE*: [0000:00:10.0, 0] Resetting controller successful. 00:08:27.844 Waiting for RPC error injection (bdev_nvme_send_cmd) process PID: 65805 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@59 -- # echo 'Waiting for RPC error injection (bdev_nvme_send_cmd) process PID:' 65805 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@60 -- # wait 65805 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # date +%s 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@61 -- # diff_time=2 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@62 -- # rpc_cmd bdev_nvme_detach_controller nvme0 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@563 -- # xtrace_disable 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@64 -- # trap - SIGINT SIGTERM EXIT 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # jq -r .cpl /tmp/err_inj_WHAew.txt 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@67 -- # spdk_nvme_status=AAAAAAAAAAAAAAAAAAACAA== 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 1 255 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:27.844 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- 
nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 1 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@68 -- # nvme_status_sc=0x1 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # base64_decode_bits AAAAAAAAAAAAAAAAAAACAA== 9 3 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@11 -- # local bin_array status 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # bin_array=($(base64 -d <(printf '%s' "$1") | hexdump -ve '/1 "0x%02x\n"')) 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # base64 -d /dev/fd/63 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # printf %s AAAAAAAAAAAAAAAAAAACAA== 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@13 -- # hexdump -ve '/1 "0x%02x\n"' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@14 -- # status=2 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@15 -- # printf 0x%x 0 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@69 -- # nvme_status_sct=0x0 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@71 -- # rm -f /tmp/err_inj_WHAew.txt 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@73 -- # killprocess 65782 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@954 -- # '[' -z 65782 ']' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@958 -- # kill -0 65782 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # uname 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 65782 00:08:27.845 killing process with pid 65782 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 65782' 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@973 -- # kill 65782 00:08:27.845 02:03:52 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@978 -- # wait 65782 00:08:29.218 02:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@75 -- # (( err_injection_sc != nvme_status_sc || err_injection_sct != nvme_status_sct )) 00:08:29.218 02:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- nvme/nvme_reset_stuck_adm_cmd.sh@79 -- # (( diff_time > test_timeout )) 00:08:29.218 ************************************ 00:08:29.218 END TEST bdev_nvme_reset_stuck_adm_cmd 00:08:29.218 ************************************ 00:08:29.218 00:08:29.218 real 0m4.584s 
00:08:29.218 user 0m16.305s 00:08:29.218 sys 0m0.442s 00:08:29.218 02:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:29.218 02:03:53 nvme.bdev_nvme_reset_stuck_adm_cmd -- common/autotest_common.sh@10 -- # set +x 00:08:29.218 02:03:53 nvme -- nvme/nvme.sh@107 -- # [[ y == y ]] 00:08:29.218 02:03:53 nvme -- nvme/nvme.sh@108 -- # run_test nvme_fio nvme_fio_test 00:08:29.218 02:03:53 nvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:29.218 02:03:53 nvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:29.218 02:03:53 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:29.218 ************************************ 00:08:29.218 START TEST nvme_fio 00:08:29.218 ************************************ 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1129 -- # nvme_fio_test 00:08:29.218 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@31 -- # PLUGIN_DIR=/home/vagrant/spdk_repo/spdk/app/fio/nvme 00:08:29.218 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@32 -- # ran_fio=false 00:08:29.218 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # get_nvme_bdfs 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # bdfs=() 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1498 -- # local bdfs 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:08:29.218 02:03:53 nvme.nvme_fio -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:08:29.219 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # bdfs=('0000:00:10.0' '0000:00:11.0' '0000:00:12.0' '0000:00:13.0') 00:08:29.219 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@33 -- # local bdfs bdf 00:08:29.219 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:29.219 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:29.219 02:03:53 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:29.480 02:03:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:10.0' 00:08:29.480 02:03:54 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:29.740 02:03:54 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:29.740 02:03:54 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:29.740 02:03:54 nvme.nvme_fio -- 
common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:29.740 02:03:54 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.10.0' --bs=4096 00:08:30.000 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:30.000 fio-3.35 00:08:30.000 Starting 1 thread 00:08:36.658 00:08:36.658 test: (groupid=0, jobs=1): err= 0: pid=65939: Sun Dec 15 02:04:00 2024 00:08:36.658 read: IOPS=23.5k, BW=91.7MiB/s (96.2MB/s)(184MiB/2001msec) 00:08:36.658 slat (nsec): min=4207, max=50843, avg=4952.87, stdev=1948.45 00:08:36.658 clat (usec): min=457, max=9461, avg=2716.79, stdev=844.28 00:08:36.658 lat (usec): min=461, max=9511, avg=2721.74, stdev=845.24 00:08:36.658 clat percentiles (usec): 00:08:36.658 | 1.00th=[ 1467], 5.00th=[ 2089], 10.00th=[ 2245], 20.00th=[ 2278], 00:08:36.658 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2409], 60.00th=[ 2507], 00:08:36.658 | 70.00th=[ 2638], 80.00th=[ 2900], 90.00th=[ 3720], 95.00th=[ 4817], 00:08:36.658 | 99.00th=[ 5932], 99.50th=[ 6259], 99.90th=[ 7242], 99.95th=[ 8160], 00:08:36.658 | 99.99th=[ 9372] 00:08:36.658 bw ( KiB/s): min=79632, max=102736, per=100.00%, avg=94386.67, stdev=12814.83, samples=3 00:08:36.658 iops : min=19908, max=25684, avg=23596.67, stdev=3203.71, samples=3 00:08:36.658 write: IOPS=23.3k, BW=91.1MiB/s (95.5MB/s)(182MiB/2001msec); 0 zone resets 00:08:36.658 slat (nsec): min=4280, max=87982, avg=5164.90, stdev=2074.22 00:08:36.658 clat (usec): min=485, max=9385, avg=2730.62, stdev=857.83 00:08:36.658 lat (usec): min=490, max=9405, avg=2735.79, stdev=858.76 00:08:36.658 clat percentiles (usec): 00:08:36.658 | 1.00th=[ 1483], 5.00th=[ 2114], 10.00th=[ 2245], 20.00th=[ 2311], 00:08:36.658 | 30.00th=[ 2343], 40.00th=[ 2376], 50.00th=[ 2442], 60.00th=[ 2507], 00:08:36.658 | 70.00th=[ 2671], 80.00th=[ 2933], 90.00th=[ 3720], 95.00th=[ 4817], 00:08:36.658 | 99.00th=[ 5997], 99.50th=[ 6390], 99.90th=[ 8029], 99.95th=[ 8291], 00:08:36.658 | 99.99th=[ 9241] 00:08:36.658 bw ( KiB/s): min=79552, max=101848, per=100.00%, avg=94373.33, stdev=12835.81, samples=3 00:08:36.658 iops : min=19888, max=25462, avg=23593.33, stdev=3208.95, samples=3 00:08:36.658 lat (usec) : 500=0.01%, 750=0.03%, 1000=0.10% 00:08:36.658 lat (msec) : 2=3.71%, 4=87.45%, 10=8.70% 00:08:36.658 cpu : usr=99.20%, sys=0.05%, ctx=3, majf=0, 
minf=608 00:08:36.658 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:36.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:36.658 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:36.658 issued rwts: total=46989,46663,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:36.658 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:36.658 00:08:36.658 Run status group 0 (all jobs): 00:08:36.658 READ: bw=91.7MiB/s (96.2MB/s), 91.7MiB/s-91.7MiB/s (96.2MB/s-96.2MB/s), io=184MiB (192MB), run=2001-2001msec 00:08:36.658 WRITE: bw=91.1MiB/s (95.5MB/s), 91.1MiB/s-91.1MiB/s (95.5MB/s-95.5MB/s), io=182MiB (191MB), run=2001-2001msec 00:08:36.658 ----------------------------------------------------- 00:08:36.658 Suppressions used: 00:08:36.658 count bytes template 00:08:36.658 1 32 /usr/src/fio/parse.c 00:08:36.658 1 8 libtcmalloc_minimal.so 00:08:36.658 ----------------------------------------------------- 00:08:36.658 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:11.0' 00:08:36.658 02:04:00 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:36.658 02:04:01 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:36.658 02:04:01 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:36.658 02:04:01 nvme.nvme_fio -- 
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:36.658 02:04:01 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096 00:08:36.658 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:36.658 fio-3.35 00:08:36.658 Starting 1 thread 00:08:43.240 00:08:43.240 test: (groupid=0, jobs=1): err= 0: pid=66000: Sun Dec 15 02:04:06 2024 00:08:43.240 read: IOPS=20.3k, BW=79.4MiB/s (83.3MB/s)(159MiB/2001msec) 00:08:43.240 slat (nsec): min=3325, max=73135, avg=5394.68, stdev=2862.21 00:08:43.240 clat (usec): min=260, max=14415, avg=3130.39, stdev=1206.86 00:08:43.240 lat (usec): min=265, max=14471, avg=3135.79, stdev=1208.35 00:08:43.240 clat percentiles (usec): 00:08:43.240 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2409], 00:08:43.240 | 30.00th=[ 2474], 40.00th=[ 2573], 50.00th=[ 2704], 60.00th=[ 2868], 00:08:43.240 | 70.00th=[ 3097], 80.00th=[ 3490], 90.00th=[ 4817], 95.00th=[ 5866], 00:08:43.240 | 99.00th=[ 7373], 99.50th=[ 8356], 99.90th=[ 9896], 99.95th=[11207], 00:08:43.240 | 99.99th=[14091] 00:08:43.240 bw ( KiB/s): min=76232, max=84248, per=100.00%, avg=81397.33, stdev=4481.33, samples=3 00:08:43.240 iops : min=19058, max=21062, avg=20349.33, stdev=1120.33, samples=3 00:08:43.240 write: IOPS=20.3k, BW=79.3MiB/s (83.1MB/s)(159MiB/2001msec); 0 zone resets 00:08:43.240 slat (nsec): min=3416, max=82971, avg=5520.75, stdev=2865.45 00:08:43.240 clat (usec): min=282, max=14238, avg=3146.78, stdev=1218.19 00:08:43.240 lat (usec): min=287, max=14254, avg=3152.30, stdev=1219.66 00:08:43.240 clat percentiles (usec): 00:08:43.240 | 1.00th=[ 1893], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2409], 00:08:43.240 | 30.00th=[ 2507], 40.00th=[ 2606], 50.00th=[ 2737], 60.00th=[ 2900], 00:08:43.240 | 70.00th=[ 3097], 80.00th=[ 3490], 90.00th=[ 4817], 95.00th=[ 5932], 00:08:43.240 | 99.00th=[ 7439], 99.50th=[ 8848], 99.90th=[10028], 99.95th=[11469], 00:08:43.240 | 99.99th=[13698] 00:08:43.240 bw ( KiB/s): min=76176, max=84384, per=100.00%, avg=81514.67, stdev=4627.74, samples=3 00:08:43.240 iops : min=19044, max=21096, avg=20378.67, stdev=1156.94, samples=3 00:08:43.240 lat (usec) : 500=0.02%, 750=0.01%, 1000=0.01% 00:08:43.240 lat (msec) : 2=1.74%, 4=82.28%, 10=15.85%, 20=0.09% 00:08:43.240 cpu : usr=98.90%, sys=0.10%, ctx=6, majf=0, minf=607 00:08:43.240 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:43.240 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:43.240 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:43.240 issued rwts: total=40694,40600,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:43.240 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:43.240 00:08:43.240 Run status group 0 (all jobs): 00:08:43.240 READ: bw=79.4MiB/s (83.3MB/s), 79.4MiB/s-79.4MiB/s (83.3MB/s-83.3MB/s), io=159MiB (167MB), run=2001-2001msec 00:08:43.240 WRITE: bw=79.3MiB/s (83.1MB/s), 79.3MiB/s-79.3MiB/s (83.1MB/s-83.1MB/s), io=159MiB (166MB), run=2001-2001msec 00:08:43.240 ----------------------------------------------------- 00:08:43.240 Suppressions used: 00:08:43.240 count bytes template 00:08:43.240 1 32 /usr/src/fio/parse.c 00:08:43.240 1 8 libtcmalloc_minimal.so 00:08:43.240 ----------------------------------------------------- 00:08:43.240 
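[Annotation: the trace above repeats the same sanitizer-preload dance before every fio run. A minimal sketch of that logic, assuming the plugin path and fio location shown in the log; when the SPDK fio plugin is built with ASAN, the sanitizer runtime must be preloaded ahead of fio itself, otherwise fio's dlopen() of the ioengine aborts.]

    #!/usr/bin/env bash
    # Sketch of the preload logic traced above (paths taken from the log).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme
    sanitizers=('libasan' 'libclang_rt.asan')

    asan_lib=
    for sanitizer in "${sanitizers[@]}"; do
        # ldd prints "libasan.so.8 => /usr/lib64/libasan.so.8 (0x...)";
        # field 3 is the resolved path. Stop at the first match.
        asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
        [[ -n $asan_lib ]] && break
    done

    # Preload the sanitizer runtime (if any) ahead of the plugin, then run
    # fio with the external spdk ioengine selected in the job file.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio \
        '--filename=trtype=PCIe traddr=0000.00.11.0' --bs=4096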
00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:12.0' 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:43.240 02:04:07 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:43.240 02:04:07 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.12.0' --bs=4096 00:08:43.240 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:43.240 fio-3.35 00:08:43.240 Starting 1 thread 00:08:49.819 00:08:49.819 test: (groupid=0, jobs=1): err= 0: pid=66066: Sun Dec 15 02:04:13 2024 00:08:49.819 read: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec) 00:08:49.819 slat (usec): min=3, max=112, avg= 5.64, stdev= 3.24 00:08:49.819 clat (usec): min=330, max=10201, avg=3432.98, stdev=1393.33 00:08:49.819 lat (usec): min=337, max=10217, avg=3438.62, stdev=1394.81 00:08:49.819 clat percentiles (usec): 00:08:49.819 | 1.00th=[ 1631], 5.00th=[ 2114], 10.00th=[ 2278], 20.00th=[ 2442], 00:08:49.819 | 30.00th=[ 
2573], 40.00th=[ 2704], 50.00th=[ 2868], 60.00th=[ 3097], 00:08:49.819 | 70.00th=[ 3621], 80.00th=[ 4621], 90.00th=[ 5538], 95.00th=[ 6325], 00:08:49.819 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9372], 99.95th=[ 9765], 00:08:49.819 | 99.99th=[10028] 00:08:49.819 bw ( KiB/s): min=68472, max=84608, per=100.00%, avg=74882.67, stdev=8563.46, samples=3 00:08:49.819 iops : min=17118, max=21152, avg=18720.67, stdev=2140.87, samples=3 00:08:49.819 write: IOPS=18.5k, BW=72.3MiB/s (75.8MB/s)(145MiB/2001msec); 0 zone resets 00:08:49.819 slat (nsec): min=3443, max=75637, avg=5739.27, stdev=3086.16 00:08:49.819 clat (usec): min=416, max=10173, avg=3454.07, stdev=1393.64 00:08:49.819 lat (usec): min=424, max=10222, avg=3459.81, stdev=1395.06 00:08:49.819 clat percentiles (usec): 00:08:49.819 | 1.00th=[ 1565], 5.00th=[ 2114], 10.00th=[ 2311], 20.00th=[ 2442], 00:08:49.819 | 30.00th=[ 2573], 40.00th=[ 2737], 50.00th=[ 2900], 60.00th=[ 3130], 00:08:49.819 | 70.00th=[ 3654], 80.00th=[ 4621], 90.00th=[ 5604], 95.00th=[ 6325], 00:08:49.819 | 99.00th=[ 7963], 99.50th=[ 8455], 99.90th=[ 9372], 99.95th=[ 9765], 00:08:49.819 | 99.99th=[10028] 00:08:49.819 bw ( KiB/s): min=68632, max=84608, per=100.00%, avg=74901.33, stdev=8524.65, samples=3 00:08:49.819 iops : min=17158, max=21152, avg=18725.33, stdev=2131.16, samples=3 00:08:49.819 lat (usec) : 500=0.01%, 750=0.02%, 1000=0.07% 00:08:49.819 lat (msec) : 2=2.71%, 4=71.02%, 10=26.15%, 20=0.01% 00:08:49.819 cpu : usr=98.90%, sys=0.00%, ctx=4, majf=0, minf=607 00:08:49.819 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:49.819 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:49.819 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:49.819 issued rwts: total=37031,37048,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:49.819 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:49.819 00:08:49.819 Run status group 0 (all jobs): 00:08:49.819 READ: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:08:49.819 WRITE: bw=72.3MiB/s (75.8MB/s), 72.3MiB/s-72.3MiB/s (75.8MB/s-75.8MB/s), io=145MiB (152MB), run=2001-2001msec 00:08:49.819 ----------------------------------------------------- 00:08:49.819 Suppressions used: 00:08:49.819 count bytes template 00:08:49.819 1 32 /usr/src/fio/parse.c 00:08:49.819 1 8 libtcmalloc_minimal.so 00:08:49.819 ----------------------------------------------------- 00:08:49.819 00:08:49.819 02:04:13 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:49.819 02:04:13 nvme.nvme_fio -- nvme/nvme.sh@34 -- # for bdf in "${bdfs[@]}" 00:08:49.819 02:04:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # grep -qE '^Namespace ID:[0-9]+' 00:08:49.819 02:04:13 nvme.nvme_fio -- nvme/nvme.sh@35 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:49.819 02:04:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_nvme_identify -r 'trtype:PCIe traddr:0000:00:13.0' 00:08:49.819 02:04:14 nvme.nvme_fio -- nvme/nvme.sh@38 -- # grep -q 'Extended Data LBA' 00:08:49.819 02:04:14 nvme.nvme_fio -- nvme/nvme.sh@41 -- # bs=4096 00:08:49.819 02:04:14 nvme.nvme_fio -- nvme/nvme.sh@43 -- # fio_nvme /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1364 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 
/home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1343 -- # local sanitizers 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1345 -- # shift 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1347 -- # local asan_lib= 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # grep libasan 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1351 -- # break 00:08:49.819 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_nvme' 00:08:49.820 02:04:14 nvme.nvme_fio -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/app/fio/nvme/example_config.fio '--filename=trtype=PCIe traddr=0000.00.13.0' --bs=4096 00:08:49.820 test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk, iodepth=128 00:08:49.820 fio-3.35 00:08:49.820 Starting 1 thread 00:08:59.798 00:08:59.798 test: (groupid=0, jobs=1): err= 0: pid=66121: Sun Dec 15 02:04:23 2024 00:08:59.798 read: IOPS=24.7k, BW=96.7MiB/s (101MB/s)(193MiB/2001msec) 00:08:59.798 slat (nsec): min=3377, max=74506, avg=4826.13, stdev=1919.06 00:08:59.798 clat (usec): min=206, max=7793, avg=2584.72, stdev=714.36 00:08:59.798 lat (usec): min=211, max=7805, avg=2589.55, stdev=715.66 00:08:59.798 clat percentiles (usec): 00:08:59.798 | 1.00th=[ 1762], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:59.798 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:59.798 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2769], 95.00th=[ 3982], 00:08:59.798 | 99.00th=[ 6325], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7504], 00:08:59.798 | 99.99th=[ 7635] 00:08:59.798 bw ( KiB/s): min=95952, max=99112, per=98.08%, avg=97077.33, stdev=1765.38, samples=3 00:08:59.798 iops : min=23988, max=24778, avg=24269.33, stdev=441.34, samples=3 00:08:59.798 write: IOPS=24.6k, BW=96.0MiB/s (101MB/s)(192MiB/2001msec); 0 zone resets 00:08:59.798 slat (nsec): min=3507, max=51356, avg=5085.56, stdev=1853.56 00:08:59.798 clat (usec): min=198, max=7808, avg=2583.49, stdev=715.84 00:08:59.798 lat (usec): min=203, max=7821, avg=2588.58, stdev=717.11 00:08:59.798 clat percentiles (usec): 00:08:59.798 | 1.00th=[ 1729], 5.00th=[ 2180], 10.00th=[ 2311], 20.00th=[ 2343], 00:08:59.798 | 30.00th=[ 2376], 40.00th=[ 2409], 50.00th=[ 2409], 60.00th=[ 2442], 00:08:59.798 | 70.00th=[ 2474], 80.00th=[ 2507], 90.00th=[ 2769], 95.00th=[ 3982], 00:08:59.798 | 
99.00th=[ 6390], 99.50th=[ 6783], 99.90th=[ 7373], 99.95th=[ 7570], 00:08:59.798 | 99.99th=[ 7701] 00:08:59.798 bw ( KiB/s): min=95472, max=100144, per=98.79%, avg=97157.33, stdev=2593.65, samples=3 00:08:59.798 iops : min=23868, max=25036, avg=24289.33, stdev=648.41, samples=3 00:08:59.798 lat (usec) : 250=0.01%, 500=0.01%, 750=0.01%, 1000=0.03% 00:08:59.798 lat (msec) : 2=2.33%, 4=92.72%, 10=4.90% 00:08:59.798 cpu : usr=99.40%, sys=0.00%, ctx=3, majf=0, minf=606 00:08:59.798 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:08:59.798 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:08:59.798 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:08:59.798 issued rwts: total=49511,49197,0,0 short=0,0,0,0 dropped=0,0,0,0 00:08:59.798 latency : target=0, window=0, percentile=100.00%, depth=128 00:08:59.798 00:08:59.798 Run status group 0 (all jobs): 00:08:59.798 READ: bw=96.7MiB/s (101MB/s), 96.7MiB/s-96.7MiB/s (101MB/s-101MB/s), io=193MiB (203MB), run=2001-2001msec 00:08:59.798 WRITE: bw=96.0MiB/s (101MB/s), 96.0MiB/s-96.0MiB/s (101MB/s-101MB/s), io=192MiB (202MB), run=2001-2001msec 00:08:59.798 ----------------------------------------------------- 00:08:59.798 Suppressions used: 00:08:59.798 count bytes template 00:08:59.798 1 32 /usr/src/fio/parse.c 00:08:59.798 1 8 libtcmalloc_minimal.so 00:08:59.798 ----------------------------------------------------- 00:08:59.798 00:08:59.798 ************************************ 00:08:59.798 END TEST nvme_fio 00:08:59.798 ************************************ 00:08:59.798 02:04:24 nvme.nvme_fio -- nvme/nvme.sh@44 -- # ran_fio=true 00:08:59.798 02:04:24 nvme.nvme_fio -- nvme/nvme.sh@46 -- # true 00:08:59.798 00:08:59.798 real 0m30.258s 00:08:59.799 user 0m18.308s 00:08:59.799 sys 0m21.696s 00:08:59.799 02:04:24 nvme.nvme_fio -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.799 02:04:24 nvme.nvme_fio -- common/autotest_common.sh@10 -- # set +x 00:08:59.799 ************************************ 00:08:59.799 END TEST nvme 00:08:59.799 ************************************ 00:08:59.799 00:08:59.799 real 1m39.172s 00:08:59.799 user 3m38.197s 00:08:59.799 sys 0m31.944s 00:08:59.799 02:04:24 nvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:08:59.799 02:04:24 nvme -- common/autotest_common.sh@10 -- # set +x 00:08:59.799 02:04:24 -- spdk/autotest.sh@213 -- # [[ 0 -eq 1 ]] 00:08:59.799 02:04:24 -- spdk/autotest.sh@217 -- # run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:59.799 02:04:24 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:08:59.799 02:04:24 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:08:59.799 02:04:24 -- common/autotest_common.sh@10 -- # set +x 00:08:59.799 ************************************ 00:08:59.799 START TEST nvme_scc 00:08:59.799 ************************************ 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh 00:08:59.799 * Looking for test storage... 
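[Annotation: the START TEST / END TEST banners and the real/user/sys triplets bracketing each suite come from the run_test helper invoked above. A condensed sketch under the assumption that it only banners, times, and propagates the exit status; the real helper in autotest_common.sh additionally toggles xtrace and prefixes the suite name onto each traced line.]

    # Sketch of a run_test-style wrapper (simplified; names mirror the log).
    run_test() {
        local name=$1; shift
        printf '%s\n' '************************************' \
                      "START TEST $name" \
                      '************************************'
        time "$@"            # emits the real/user/sys lines seen above
        local rc=$?
        printf '%s\n' '************************************' \
                      "END TEST $name" \
                      '************************************'
        return $rc
    }

    run_test nvme_scc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_scc.sh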
00:08:59.799 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1711 -- # lcov --version 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@336 -- # IFS=.-: 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@336 -- # read -ra ver1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@337 -- # IFS=.-: 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@337 -- # read -ra ver2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@338 -- # local 'op=<' 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@340 -- # ver1_l=2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@341 -- # ver2_l=1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@344 -- # case "$op" in 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@345 -- # : 1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@364 -- # (( v = 0 )) 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@365 -- # decimal 1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@353 -- # local d=1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@355 -- # echo 1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@365 -- # ver1[v]=1 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@366 -- # decimal 2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@353 -- # local d=2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@355 -- # echo 2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@366 -- # ver2[v]=2 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@368 -- # return 0 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 02:04:24 nvme_scc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:08:59.799 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:08:59.799 --rc genhtml_branch_coverage=1 00:08:59.799 --rc genhtml_function_coverage=1 00:08:59.799 --rc genhtml_legend=1 00:08:59.799 --rc geninfo_all_blocks=1 00:08:59.799 --rc geninfo_unexecuted_blocks=1 00:08:59.799 00:08:59.799 ' 00:08:59.799 02:04:24 nvme_scc -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@15 -- # shopt -s extglob 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:08:59.799 02:04:24 nvme_scc -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:08:59.799 02:04:24 nvme_scc -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 02:04:24 nvme_scc -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 02:04:24 nvme_scc -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:08:59.799 02:04:24 nvme_scc -- paths/export.sh@5 -- # export PATH 00:08:59.799 02:04:24 nvme_scc -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 
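[Annotation: the long trace above is scripts/common.sh deciding whether the installed lcov (1.15) predates 2.x, in which case the extra branch/function coverage rc options are exported. A condensed sketch of that comparison, assuming missing components pad with 0; helper names mirror the trace, and the real cmp_versions dispatches on an operator argument rather than hardcoding less-than.]

    # Split both versions on '.', '-' and ':' and compare numerically,
    # left to right, as traced above.
    decimal() { local d=$1; [[ $d =~ ^[0-9]+$ ]] && echo "$d" || echo 0; }

    lt() {   # "is $1 < $2?"  e.g. lt 1.15 2 -> true (lcov 1.15 is pre-2.x)
        local -a ver1 ver2
        IFS=.-: read -ra ver1 <<< "$1"
        IFS=.-: read -ra ver2 <<< "$2"
        local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
        for (( v = 0; v < max; v++ )); do
            local a=$(decimal "${ver1[v]:-0}") b=$(decimal "${ver2[v]:-0}")
            (( a > b )) && return 1   # first differing component decides
            (( a < b )) && return 0
        done
        return 1                      # equal -> not less-than
    }

    lt 1.15 2 && echo "old lcov: enable branch/function coverage rc options"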
00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@10 -- # ctrls=() 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@10 -- # declare -A ctrls 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@11 -- # nvmes=() 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@11 -- # declare -A nvmes 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@12 -- # bdfs=() 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@12 -- # declare -A bdfs 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:08:59.799 02:04:24 nvme_scc -- nvme/functions.sh@14 -- # nvme_name= 00:08:59.799 02:04:24 nvme_scc -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:08:59.799 02:04:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # uname 00:08:59.799 02:04:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ Linux == Linux ]] 00:08:59.799 02:04:24 nvme_scc -- nvme/nvme_scc.sh@12 -- # [[ QEMU == QEMU ]] 00:08:59.799 02:04:24 nvme_scc -- nvme/nvme_scc.sh@14 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:00.059 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:00.318 Waiting for block devices as requested 00:09:00.318 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.318 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.318 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:00.318 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:05.616 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:05.616 02:04:30 nvme_scc -- nvme/nvme_scc.sh@16 -- # scan_nvme_ctrls 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:05.616 02:04:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.616 02:04:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:05.616 02:04:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.616 02:04:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 
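[Annotation: everything that follows is scan_nvme_ctrls walking each controller and evaluating nvme_get, which turns every "field : value" line of id-ctrl output into an entry of a global associative array named after the controller. A minimal sketch of that pattern using the system nvme-cli (the harness pins /usr/local/src/nvme-cli/nvme instead), so later checks can test e.g. ${nvme0[oncs]} for Simple Copy support.]

    # Sketch of the nvme_get parsing loop being traced below.
    declare -A nvme0

    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue             # skip section headers/blanks
        reg=${reg//[[:space:]]/}              # strip padding around the key
        val=${val#"${val%%[![:space:]]*}"}    # ltrim the value
        nvme0[$reg]=$val
    done < <(nvme id-ctrl /dev/nvme0)

    echo "vid=${nvme0[vid]} mdts=${nvme0[mdts]} oncs=${nvme0[oncs]}"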
00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.616 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 
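[Annotation: one field captured above deserves a side note. MDTS is a power-of-two multiplier of the controller's minimum memory page size (CAP.MPSMIN); assuming the 4 KiB MPSMIN that QEMU's emulated controller reports, the mdts=7 recorded above caps a single data transfer at 2^7 * 4096 = 512 KiB.]

    mdts=7          # from nvme0[mdts] above
    mpsmin=4096     # assumption: CAP.MPSMIN of 4 KiB
    echo "max transfer: $(( (1 << mdts) * mpsmin )) bytes"   # -> 524288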
00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 
-- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.617 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:05.618 02:04:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 
-- # nvme0[fna]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:05.618 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.619 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # 
read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:05.619 
02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:05.619 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
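(The ng0n1 block above is the nvme_get helper from test/nvme/functions.sh walking "nvme id-ns" output one "reg : val" pair at a time and storing each field in a global associative array. Below is a minimal sketch of that loop, reconstructed from the xtrace markers functions.sh@16-23 visible in this log; the IFS=: / read / eval structure is taken straight from the trace, while the whitespace trimming and the NVME_CLI wrapper variable are illustrative assumptions, not the script's exact code.)

    # Sketch of nvme_get as exercised in the trace above (assumptions noted inline).
    NVME_CLI=${NVME_CLI:-/usr/local/src/nvme-cli/nvme}   # path seen at functions.sh@16

    nvme_get() {
        local ref=$1 reg val                  # ref names the target array, e.g. ng0n1
        shift
        local -gA "$ref=()"                   # matches functions.sh@20: local -gA 'ng0n1=()'

        while IFS=: read -r reg val; do       # split each line on the first ':' only,
                                              # so "ps0 : mp:25.00W ..." keeps its colons in val
            [[ -n $val ]] || continue         # header/blank lines have no value part (functions.sh@22)
            reg=${reg//[[:space:]]/}          # "nsze      " -> "nsze"   (trim idiom assumed)
            val=${val#"${val%%[![:space:]]*}"}  # strip leading spaces; trailing ones survive,
                                                # which is why the log records nvme1[sn]='12340 '
            eval "${ref}[${reg}]=\"${val}\""  # matches functions.sh@23: eval 'ng0n1[nsze]="0x140000"'
        done < <("$NVME_CLI" "$@")            # e.g. nvme id-ns /dev/ng0n1
    }

(In the log this runs as "nvme_get ng0n1 id-ns /dev/ng0n1"; afterwards fields such as ${ng0n1[nsze]} — 0x140000 here — stay available to the rest of the nvme_scc test.)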
00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:05.620 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.620 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:05.621 02:04:30 nvme_scc 
-- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 
nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:05.621 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.621 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:05.622 02:04:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.622 02:04:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:05.622 02:04:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.622 02:04:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:05.622 02:04:30 
nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sn]="12340 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.622 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 
02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:05.623 
02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[mtfa]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.623 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme1[anacap]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
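The long run of `IFS=:` / `read -r reg val` / `eval` entries above is nvme_get tokenizing the id-ctrl dump field by field into a Bash associative array. A minimal standalone sketch of that pattern (an assumed rewrite for illustration, not the verbatim nvme/functions.sh source):

    #!/usr/bin/env bash
    # Sketch: split each "field : value" line from nvme-cli on the first colon
    # and store it in an associative array, as the trace above does via eval.
    declare -A ctrl
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}               # drop padding around the field name
        val=${val#"${val%%[![:space:]]*}"}     # trim leading spaces from the value
        [[ -n $reg && -n $val ]] && ctrl[$reg]=$val
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1)
    printf 'wctemp=%s cctemp=%s\n' "${ctrl[wctemp]}" "${ctrl[cctemp]}"

With the controller captured above this would print wctemp=343 cctemp=373; values whose text itself contains colons (the ps0/rwt power-state lines) survive because read assigns the whole remainder of the line to val.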
00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awun]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.624 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.625 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme1[fcatt]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.625 02:04:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[flbas]="0x7"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[flbas]=0x7 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mc]="0x3"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mc]=0x3 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dpc]="0x1f"' 00:09:05.625 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dpc]=0x1f 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dps]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dps]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nmic]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nmic]=0 
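A quick decode of the ng1n1 fields just captured: flbas=0x7 selects the in-use LBA format by its low nibble (bit 4 would flag extended metadata), which is why lbaf7 is the entry tagged "(in use)" further down. A hedged one-liner check under those captured values:

    # Sketch: decode FLBAS from the dump above (0x7 -> format index 7, no extended LBA).
    flbas=0x7
    echo "lbaf index: $(( flbas & 0xf )), extended-lba: $(( (flbas >> 4) & 1 ))"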
00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[rescap]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[rescap]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[fpi]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[fpi]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[dlfeat]="1"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[dlfeat]=1 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawun]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawun]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nawupf]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nawupf]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nacwu]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nacwu]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabsn]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabsn]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabo]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabo]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nabspf]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nabspf]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[noiob]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[noiob]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmcap]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmcap]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwg]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwg]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npwa]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npwa]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npdg]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npdg]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[npda]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[npda]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nows]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nows]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mssrl]="128"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mssrl]=128 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[mcl]="128"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[mcl]=128 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[msrc]="127"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[msrc]=127 00:09:05.626 02:04:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nulbaf]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nulbaf]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[anagrpid]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[anagrpid]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nsattr]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nsattr]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nvmsetid]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nvmsetid]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[endgid]="0"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[endgid]=0 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[nguid]="00000000000000000000000000000000"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[nguid]=00000000000000000000000000000000 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[eui64]="0000000000000000"' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[eui64]=0000000000000000 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # 
ng1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.626 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=nvme1n1 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme1n1 reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme1n1=()' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 
02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsze]="0x17a17a"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsze]=0x17a17a 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[ncap]="0x17a17a"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[ncap]=0x17a17a 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nuse]="0x17a17a"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nuse]=0x17a17a 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsfeat]="0x14"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsfeat]=0x14 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nlbaf]="7"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nlbaf]=7 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[flbas]="0x7"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[flbas]=0x7 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mc]="0x3"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mc]=0x3 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dpc]="0x1f"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dpc]=0x1f 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
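The `for ns in "$ctrl/"@(...)` entries at functions.sh@54 above build an extglob alternation out of parameter expansions, so both the character node (ng1n1) and the block node (nvme1n1) of the same controller get visited, each triggering its own id-ns parse. A standalone sketch of that glob (assumes extglob and the /sys layout shown in the trace):

    shopt -s extglob
    ctrl=/sys/class/nvme/nvme1
    inst=${ctrl##*nvme}   # "1"     (everything after the last "nvme")
    name=${ctrl##*/}      # "nvme1" (basename of the sysfs path)
    for ns in "$ctrl/"@("ng${inst}"|"${name}n")*; do
        [[ -e $ns ]] || continue          # skip the literal pattern when nothing matches
        echo "namespace node: ${ns##*/}"  # -> ng1n1, nvme1n1
    done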
00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dps]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dps]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nmic]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nmic]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[rescap]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[rescap]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[fpi]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[fpi]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[dlfeat]="1"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[dlfeat]=1 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawun]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawun]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nawupf]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nawupf]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nacwu]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nacwu]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabsn]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabsn]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabo]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabo]=0 00:09:05.627 
02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nabspf]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nabspf]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[noiob]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[noiob]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmcap]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmcap]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwg]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwg]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npwa]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npwa]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npdg]="0"' 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npdg]=0 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.627 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[npda]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[npda]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nows]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nows]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mssrl]="128"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mssrl]=128 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[mcl]="128"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[mcl]=128 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[msrc]="127"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[msrc]=127 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nulbaf]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nulbaf]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[anagrpid]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[anagrpid]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nsattr]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nsattr]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nvmsetid]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nvmsetid]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[endgid]="0"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[endgid]=0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[nguid]="00000000000000000000000000000000"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[nguid]=00000000000000000000000000000000 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[eui64]="0000000000000000"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[eui64]=0000000000000000 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf4]="ms:0 lbads:12 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf4]='ms:0 lbads:12 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 (in use) ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme1n1[lbaf7]="ms:64 lbads:12 rp:0 (in use)"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme1n1[lbaf7]='ms:64 lbads:12 rp:0 (in use)' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1 00:09:05.628 02:04:30 
nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme2 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:12.0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:12.0 00:09:05.628 02:04:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.628 02:04:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:05.628 02:04:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.628 02:04:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme2 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme2 id-ctrl /dev/nvme2 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme2 reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme2=()' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'nvme2[fr]="8.0.0 "' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:05.628 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:05.629 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.629 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 
00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mxtmt]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mxtmt]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sanicap]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sanicap]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmminds]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmminds]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[hmmaxd]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[hmmaxd]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nsetidmax]="0"' 00:09:05.630 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2[nsetidmax]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[endgidmax]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[endgidmax]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anatt]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anatt]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anacap]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anacap]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[anagrpmax]="0"' 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[anagrpmax]=0 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.630 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nanagrpid]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[pels]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[pels]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[domainid]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[domainid]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[megcap]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[megcap]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sqes]="0x66"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sqes]=0x66 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 
-- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[cqes]="0x44"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[cqes]=0x44 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcmd]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcmd]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nn]="256"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nn]=256 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[oncs]="0x15d"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[oncs]=0x15d 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fuses]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fuses]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fna]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fna]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[vwc]="0x7"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[vwc]=0x7 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awun]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awun]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[awupf]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[awupf]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icsvscc]="0"' 00:09:05.899 
02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icsvscc]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[nwpc]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[nwpc]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[acwu]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[acwu]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ocfs]="0x3"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ocfs]=0x3 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[sgls]="0x1"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[sgls]=0x1 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[mnan]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[mnan]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxdna]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxdna]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[maxcna]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[maxcna]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12342 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[subnqn]="nqn.2019-08.org.qemu:12342"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[subnqn]=nqn.2019-08.org.qemu:12342 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ioccsz]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ioccsz]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- 
# IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[iorcsz]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[iorcsz]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[icdoff]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[icdoff]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[fcatt]="0"' 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[fcatt]=0 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.899 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[msdbd]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[msdbd]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ofcs]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2[active_power_workload]="-"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2[active_power_workload]=- 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.900 
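[The trace above is one full cycle of the nvme_get helper: id-ctrl output for /dev/nvme2 is read line by line and stored into a global associative array named after the controller. Below is a minimal sketch of that pattern, assuming nvme-cli's default "field : value" text output; nvme_get_sketch is an illustrative name, not the verbatim nvme/functions.sh source.]

nvme_get_sketch() {
    local ref=$1 reg val            # $1 names the global assoc array to fill
    shift
    local -gA "$ref=()"             # e.g. declares nvme2=() at global scope
    while IFS=: read -r reg val; do
        reg=${reg//[[:space:]]/}            # "ps    0 " -> "ps0"
        val=${val#"${val%%[![:space:]]*}"}  # left-trim the value
        [[ -n $reg ]] || continue           # skip blank / non key:value lines
        eval "${ref}[\$reg]=\$val"          # nvme2[vid]=0x1b36, etc.
    done < <("$@")                  # remaining args are the command to run
}

[Illustrative use, mirroring functions.sh@52 in the trace:
    nvme_get_sketch nvme2 /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme2
    echo "${nvme2[subnqn]}"    # -> nqn.2019-08.org.qemu:12342
The eval indirection is what lets one helper populate differently named arrays (nvme2, ng2n1, ...) per device.]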
02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n1 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n1 reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n1=()' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsze]="0x100000"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[ncap]="0x100000"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[ncap]=0x100000 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nuse]="0x100000"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsfeat]="0x14"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsfeat]=0x14 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nlbaf]="7"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nlbaf]=7 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[flbas]="0x4"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[flbas]=0x4 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mc]="0x3"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mc]=0x3 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 
00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dpc]="0x1f"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dpc]=0x1f 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dps]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dps]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nmic]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nmic]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[rescap]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[rescap]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[fpi]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[fpi]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[dlfeat]="1"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[dlfeat]=1 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawun]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nawupf]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nawupf]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nacwu]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nacwu]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- 
# eval 'ng2n1[nabsn]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabsn]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabo]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabo]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nabspf]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nabspf]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[noiob]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[noiob]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmcap]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmcap]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwg]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwg]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npwa]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npwa]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npdg]="0"' 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npdg]=0 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.900 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[npda]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[npda]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nows]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nows]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mssrl]="128"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[mcl]="128"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[mcl]=128 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[msrc]="127"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[msrc]=127 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nulbaf]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nulbaf]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[anagrpid]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[anagrpid]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nsattr]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nsattr]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nvmsetid]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nvmsetid]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[endgid]="0"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[endgid]=0 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[nguid]="00000000000000000000000000000000"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[nguid]=00000000000000000000000000000000 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- 
nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[eui64]="0000000000000000"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[eui64]=0000000000000000 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 
'ng2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@56 -- # ns_dev=ng2n2 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=ng2n2 reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'ng2n2=()' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsze]="0x100000"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[ncap]="0x100000"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[ncap]=0x100000 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nuse]="0x100000"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nuse]=0x100000 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nsfeat]="0x14"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nsfeat]=0x14 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nlbaf]="7"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nlbaf]=7 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[flbas]="0x4"' 00:09:05.901 02:04:30 nvme_scc -- 
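[With ng2n1 fully parsed and registered into _ctrl_ns at functions.sh@58, the @54 loop moves on to the controller's next namespace node. The glob in that loop is worth unpacking; here is a sketch of it in isolation, with ctrl fixed to the value from this trace. It relies on bash's extglob @(...) alternation; variable names are illustrative.]

shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
declare -A _ctrl_ns
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    # expands to @(ng2|nvme2n)*: matches both the generic char nodes
    # (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) in sysfs
    ns=${ns##*/}
    _ctrl_ns[${ns##*n}]=$ns     # key by namespace id: _ctrl_ns[1]=ng2n1
done

[${ns##*n} strips everything through the last "n", leaving the namespace index, which is why the registration at @58 reads _ctrl_ns[${ns##*n}]=ng2n1.]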
nvme/functions.sh@23 -- # ng2n2[flbas]=0x4 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[mc]="0x3"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[mc]=0x3 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dpc]="0x1f"' 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dpc]=0x1f 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.901 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dps]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dps]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nmic]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nmic]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[rescap]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[fpi]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[fpi]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[dlfeat]="1"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[dlfeat]=1 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawun]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawun]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nawupf]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nawupf]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 
02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nacwu]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nacwu]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabsn]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabsn]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabo]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabo]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nabspf]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nabspf]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[noiob]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[noiob]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[nvmcap]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[nvmcap]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwg]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npwa]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npwa]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npdg]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2[npdg]=0 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'ng2n2[npda]="0"' 00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # 
ng2n2[npda]=0
00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:05.902 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
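For readers following the trace: every nvme_get invocation above (functions.sh@16-23) follows the same pattern. It runs nvme-cli's id-ns against the device node, splits each output line on the first colon, and evals the pair into a globally scoped associative array named after the node. A minimal sketch of that loop, assuming nvme-cli's "key : value" output format; parse_id_ns is an illustrative name, not one taken from functions.sh:

#!/usr/bin/env bash
# Re-creation of the traced loop: store every id-ns register in an
# associative array keyed by register name (nsze, ncap, flbas, lbaf4, ...).
declare -A ns    # the real script does: local -gA 'ng2n2=()'

parse_id_ns() {  # hypothetical helper; nvme_get does this inline
    local dev=$1 reg val
    while IFS=: read -r reg val; do
        [[ -n $reg && -n $val ]] || continue   # skip blank/header lines (the [[ -n '' ]] hits above)
        reg=${reg//[[:space:]]/}               # 'lbaf  4 ' -> 'lbaf4'
        ns[$reg]=${val# }                      # cf. eval 'ng2n2[nsze]="0x100000"'
    done < <(nvme id-ns "$dev")
}

parse_id_ns /dev/ng2n2
echo "nsze=${ns[nsze]} flbas=${ns[flbas]}"

The eval in the real script is what makes the array name dynamic ($ref expands to ng2n2, nvme2n1, and so on); the sketch fixes the name so the parsing logic stays visible.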
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:05.903 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # ng2n3: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
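The dumped values are easy to sanity-check by hand: flbas=0x4 selects LBA format index 4, i.e. lbaf4='ms:0 lbads:12 rp:0 (in use)', meaning no metadata and 2^12 = 4096-byte logical blocks, so nsze=0x100000 blocks works out to 4 GiB per namespace. The same arithmetic in shell, with the values copied from the dump above:

flbas=0x4                                  # bits 3:0 select the LBA format -> lbaf4
lbads=12                                   # from 'ms:0 lbads:12 rp:0 (in use)'
nsze=0x100000                              # namespace size in logical blocks
echo $(( nsze * (1 << lbads) ))            # 4294967296 bytes
echo $(( nsze * (1 << lbads) >> 30 ))GiB   # 4GiB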
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1
00:09:05.904 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:05.905 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:05.905 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:05.905 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n1: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1
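The loop header repeated at functions.sh@54 is an extglob that picks up both the generic character nodes (ng2nX) and the block nodes (nvme2nX) under the controller's sysfs directory, which is why each namespace appears twice in this trace. A standalone sketch of the same glob, with the controller path assumed from the trace:

#!/usr/bin/env bash
shopt -s extglob nullglob
ctrl=/sys/class/nvme/nvme2
# ${ctrl##*nvme} -> '2' and ${ctrl##*/} -> 'nvme2', so the pattern below
# expands to @(ng2|nvme2n)* : ng2n1..ng2n3 plus nvme2n1..nvme2n3.
for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
    echo "namespace node ${ns##*/} -> index ${ns##*n}"   # index feeds _ctrl_ns[...]
done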
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]]
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nsze=0x100000 ncap=0x100000 nuse=0x100000 nsfeat=0x14 nlbaf=7 flbas=0x4 mc=0x3 dpc=0x1f dps=0
00:09:05.906 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n2: lbaf4='ms:0 lbads:12 rp:0 (in use)' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0'
00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2
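Note that the block-node pass that follows returns byte-for-byte the same id-ns structure as the ng2nX pass above; both node types address the same namespace, just through different kernel interfaces (generic character device vs. block device). Either invocation below, with the nvme-cli path as used by the trace, yields the identical dump:

/usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3      # generic character node
/usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3    # block node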
02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:05.907 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:05.908 02:04:30 nvme_scc -- 
nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ 
-n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # 
nvme2n3[mcl]=128 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:05.908 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:05.909 02:04:30 nvme_scc -- 
nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:05.909 02:04:30 nvme_scc -- scripts/common.sh@18 -- # local i 00:09:05.909 02:04:30 nvme_scc -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:05.909 02:04:30 nvme_scc -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:05.909 02:04:30 nvme_scc -- scripts/common.sh@27 -- # return 0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@18 -- # shift 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 
nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:05.909 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:05.909 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:05.910 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 
02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmin]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:05.910 02:04:30 nvme_scc -- 
nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.910 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 
02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nanagrpid]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:05.911 
02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read 
-r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[mnan]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.911 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.912 02:04:30 
nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # IFS=: 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@21 -- # read -r reg val 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:05.912 02:04:30 nvme_scc -- 
nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@65 -- # (( 4 > 0 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # get_ctrl_with_feature scc
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@204 -- # local _ctrls feature=scc
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature"))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@206 -- # get_ctrls_with_feature scc
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@192 -- # (( 4 == 0 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@194 -- # local ctrl feature=scc
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@196 -- # type -t ctrl_has_scc
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@196 -- # [[ function == function ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme1 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme1 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme1 ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme0
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme0 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme0
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme0
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme0 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme0 ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme0
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme3 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme3 oncs
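The selection logic traced here is the heart of the nvme_scc prologue: get_ctrls_with_feature walks every discovered controller, pulls the cached ONCS (Optional NVM Command Support) value that was scraped from `nvme id-ctrl`, and keeps the controller if bit 8, the Simple Copy bit, is set, which is exactly the test at functions.sh@188, (( oncs & 1 << 8 )). All four QEMU controllers report oncs=0x15d, so all four qualify, and nvme1 is picked. A minimal standalone sketch of the same probe follows, assuming nvme-cli is in PATH; the helper names are illustrative, not the functions.sh originals, and it parses id-ctrl with a plain read loop instead of functions.sh's eval-based nvme_get:

    #!/usr/bin/env bash
    # Sketch: parse `nvme id-ctrl` key/value output into an associative array,
    # then test ONCS bit 8 (Simple Copy support), mirroring the trace above.
    declare -A ctrl=()

    parse_id_ctrl() { # usage: parse_id_ctrl /dev/nvmeX
      local reg val
      while IFS=: read -r reg val; do
        [[ -n $val ]] || continue        # skip lines that are not "reg : val"
        reg=${reg//[[:space:]]/}         # strip the padding around the name
        ctrl[$reg]=${val# }
      done < <(nvme id-ctrl "$1")
    }

    has_simple_copy() { # ONCS bit 8 == Simple Copy command supported
      (( ${ctrl[oncs]:-0} & 1 << 8 ))
    }

    parse_id_ctrl /dev/nvme1
    has_simple_copy && echo "nvme1 supports SCC (oncs=${ctrl[oncs]})"

Bash arithmetic accepts the 0x prefix directly, which is why both the scripts and this sketch can test a value like 0x15d against 1 << 8 without converting it first.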
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme3 ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme3
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}"
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # ctrl_has_scc nvme2
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@184 -- # local ctrl=nvme2 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # get_oncs nvme2
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@171 -- # local ctrl=nvme2
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@172 -- # get_nvme_ctrl_feature nvme2 oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=oncs
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@71 -- # [[ -n nvme2 ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@75 -- # [[ -n 0x15d ]]
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@76 -- # echo 0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@186 -- # oncs=0x15d
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@188 -- # (( oncs & 1 << 8 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@199 -- # echo nvme2
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@207 -- # (( 4 > 0 ))
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@208 -- # echo nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/functions.sh@209 -- # return 0
00:09:05.912 02:04:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # ctrl=nvme1
00:09:05.912 02:04:30 nvme_scc -- nvme/nvme_scc.sh@17 -- # bdf=0000:00:10.0
00:09:05.912 02:04:30 nvme_scc -- nvme/nvme_scc.sh@19 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:09:06.485 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:09:06.746 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:09:07.007 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:09:07.007 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:09:07.007 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:09:07.007 02:04:31 nvme_scc -- nvme/nvme_scc.sh@21 -- # run_test nvme_simple_copy /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:07.007 02:04:31 nvme_scc -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']'
00:09:07.007 02:04:31 nvme_scc -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.007 02:04:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:07.007 ************************************
00:09:07.007 START TEST nvme_simple_copy
00:09:07.007 ************************************
00:09:07.007 02:04:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/simple_copy/simple_copy -r 'trtype:pcie traddr:0000:00:10.0'
00:09:07.268 Initializing NVMe Controllers
00:09:07.268 Attaching to 0000:00:10.0
00:09:07.268 Controller supports SCC. Attached to 0000:00:10.0
00:09:07.268 Namespace ID: 1 size: 6GB
00:09:07.268 Initialization complete.
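With the SCC-capable controller at 0000:00:10.0 attached, the results that follow show the whole round trip: the test writes random data to LBAs 0-63, issues a single Simple Copy with destination LBA 256, then reads back and counts matching LBAs. A rough userspace equivalent using nvme-cli rather than the SPDK driver is sketched below; it is not the simple_copy app itself, and the --slbs/--blocks/--sdlba spellings are assumptions to verify against `nvme copy --help` for your nvme-cli version:

    #!/usr/bin/env bash
    # Destructive sketch: reproduce the simple_copy round trip on a raw
    # namespace via nvme-cli. Device path and flag names are assumptions.
    set -euo pipefail
    dev=/dev/nvme1n1
    bs=4096   # matches "Namespace Block Size:4096" in the output below

    # 1. Write random data to LBAs 0-63.
    dd if=/dev/urandom of="$dev" bs="$bs" count=64 oflag=direct status=none

    # 2. One Simple Copy: source range starting at LBA 0, 64 blocks
    #    (the NLB field is 0-based, hence 63), destination LBA 256.
    nvme copy "$dev" --slbs=0 --blocks=63 --sdlba=256

    # 3. Read both ranges back and compare them block for block.
    dd if="$dev" bs="$bs" count=64 iflag=direct status=none > /tmp/src.bin
    dd if="$dev" bs="$bs" skip=256 count=64 iflag=direct status=none > /tmp/dst.bin
    cmp -s /tmp/src.bin /tmp/dst.bin && echo "LBAs matching Written Data: 64"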
00:09:07.268
00:09:07.268 Controller QEMU NVMe Ctrl (12340 )
00:09:07.268 Controller PCI vendor:6966 PCI subsystem vendor:6900
00:09:07.268 Namespace Block Size:4096
00:09:07.268 Writing LBAs 0 to 63 with Random Data
00:09:07.268 Copied LBAs from 0 - 63 to the Destination LBA 256
00:09:07.268 LBAs matching Written Data: 64
00:09:07.268
00:09:07.268 real 0m0.275s
00:09:07.268 user 0m0.118s
00:09:07.268 sys 0m0.055s
00:09:07.268 02:04:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:07.268 02:04:31 nvme_scc.nvme_simple_copy -- common/autotest_common.sh@10 -- # set +x
00:09:07.268 ************************************
00:09:07.268 END TEST nvme_simple_copy
00:09:07.268 ************************************
00:09:07.268
00:09:07.268 real 0m7.729s
00:09:07.268 user 0m1.132s
00:09:07.268 sys 0m1.327s
00:09:07.268 02:04:31 nvme_scc -- common/autotest_common.sh@1130 -- # xtrace_disable
00:09:07.268 ************************************
00:09:07.268 END TEST nvme_scc
00:09:07.268 ************************************
00:09:07.268 02:04:31 nvme_scc -- common/autotest_common.sh@10 -- # set +x
00:09:07.268 02:04:32 -- spdk/autotest.sh@219 -- # [[ 0 -eq 1 ]]
00:09:07.268 02:04:32 -- spdk/autotest.sh@222 -- # [[ 0 -eq 1 ]]
00:09:07.268 02:04:32 -- spdk/autotest.sh@225 -- # [[ '' -eq 1 ]]
00:09:07.268 02:04:32 -- spdk/autotest.sh@228 -- # [[ 1 -eq 1 ]]
00:09:07.268 02:04:32 -- spdk/autotest.sh@229 -- # run_test nvme_fdp test/nvme/nvme_fdp.sh
00:09:07.268 02:04:32 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:09:07.268 02:04:32 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:09:07.268 02:04:32 -- common/autotest_common.sh@10 -- # set +x
00:09:07.268 ************************************
00:09:07.268 START TEST nvme_fdp
00:09:07.268 ************************************
00:09:07.268 02:04:32 nvme_fdp -- common/autotest_common.sh@1129 -- # test/nvme/nvme_fdp.sh
00:09:07.529 * Looking for test storage...
00:09:07.529 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme
00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1711 -- # lcov --version
00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@333 -- # local ver1 ver1_l
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@334 -- # local ver2 ver2_l
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@336 -- # IFS=.-:
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@336 -- # read -ra ver1
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@337 -- # IFS=.-:
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@337 -- # read -ra ver2
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@338 -- # local 'op=<'
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@340 -- # ver1_l=2
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@341 -- # ver2_l=1
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@344 -- # case "$op" in
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@345 -- # : 1
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@364 -- # (( v = 0 ))
00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ?
ver1_l : ver2_l) )) 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@365 -- # decimal 1 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@353 -- # local d=1 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@355 -- # echo 1 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@365 -- # ver1[v]=1 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@366 -- # decimal 2 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@353 -- # local d=2 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@355 -- # echo 2 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@366 -- # ver2[v]=2 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:07.529 02:04:32 nvme_fdp -- scripts/common.sh@368 -- # return 0 00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.529 --rc genhtml_branch_coverage=1 00:09:07.529 --rc genhtml_function_coverage=1 00:09:07.529 --rc genhtml_legend=1 00:09:07.529 --rc geninfo_all_blocks=1 00:09:07.529 --rc geninfo_unexecuted_blocks=1 00:09:07.529 00:09:07.529 ' 00:09:07.529 02:04:32 nvme_fdp -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:07.529 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.529 --rc genhtml_branch_coverage=1 00:09:07.529 --rc genhtml_function_coverage=1 00:09:07.529 --rc genhtml_legend=1 00:09:07.530 --rc geninfo_all_blocks=1 00:09:07.530 --rc geninfo_unexecuted_blocks=1 00:09:07.530 00:09:07.530 ' 00:09:07.530 02:04:32 nvme_fdp -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:07.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.530 --rc genhtml_branch_coverage=1 00:09:07.530 --rc genhtml_function_coverage=1 00:09:07.530 --rc genhtml_legend=1 00:09:07.530 --rc geninfo_all_blocks=1 00:09:07.530 --rc geninfo_unexecuted_blocks=1 00:09:07.530 00:09:07.530 ' 00:09:07.530 02:04:32 nvme_fdp -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:07.530 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:07.530 --rc genhtml_branch_coverage=1 00:09:07.530 --rc genhtml_function_coverage=1 00:09:07.530 --rc genhtml_legend=1 00:09:07.530 --rc geninfo_all_blocks=1 00:09:07.530 --rc geninfo_unexecuted_blocks=1 00:09:07.530 00:09:07.530 ' 00:09:07.530 02:04:32 nvme_fdp -- cuse/common.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@7 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/nvme/functions.sh 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common/nvme/../../../ 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@7 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@8 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:09:07.530 02:04:32 nvme_fdp -- scripts/common.sh@15 -- # shopt -s extglob 00:09:07.530 02:04:32 nvme_fdp -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:09:07.530 02:04:32 nvme_fdp -- scripts/common.sh@552 -- # [[ -e 
/etc/opt/spdk-pkgdep/paths/export.sh ]] 00:09:07.530 02:04:32 nvme_fdp -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:09:07.530 02:04:32 nvme_fdp -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.530 02:04:32 nvme_fdp -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.530 02:04:32 nvme_fdp -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.530 02:04:32 nvme_fdp -- paths/export.sh@5 -- # export PATH 00:09:07.530 02:04:32 nvme_fdp -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@10 -- # ctrls=() 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@10 -- # declare -A ctrls 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@11 -- # nvmes=() 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@11 -- # declare -A nvmes 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@12 -- # bdfs=() 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@12 -- # declare -A bdfs 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@13 -- # ordered_ctrls=() 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@13 -- # declare -a ordered_ctrls 00:09:07.530 02:04:32 nvme_fdp -- nvme/functions.sh@14 -- # nvme_name= 00:09:07.530 02:04:32 nvme_fdp -- cuse/common.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:07.530 02:04:32 nvme_fdp -- nvme/nvme_fdp.sh@10 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:07.791 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:08.052 Waiting for block devices as requested 00:09:08.052 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.052 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.052 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:08.313 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:13.615 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:13.615 02:04:37 nvme_fdp -- nvme/nvme_fdp.sh@12 -- # scan_nvme_ctrls 00:09:13.615 02:04:37 nvme_fdp 
-- nvme/functions.sh@45 -- # local ctrl ctrl_dev reg val ns pci 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:11.0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:11.0 00:09:13.615 02:04:37 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.615 02:04:37 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:13.615 02:04:37 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.615 02:04:37 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme0 id-ctrl /dev/nvme0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0 reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0=()' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vid]="0x1b36"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vid]=0x1b36 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ssvid]="0x1af4"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ssvid]=0x1af4 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12341 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sn]="12341 "' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sn]='12341 ' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mn]="QEMU NVMe Ctrl "' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mn]='QEMU NVMe Ctrl ' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fr]="8.0.0 "' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fr]='8.0.0 ' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rab]="6"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rab]=6 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ieee]="525400"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ieee]=525400 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cmic]="0"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cmic]=0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mdts]="7"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mdts]=7 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntlid]="0"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntlid]=0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ver]="0x10400"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ver]=0x10400 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3r]="0"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3r]=0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rtd3e]="0"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rtd3e]=0 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oaes]="0x100"' 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oaes]=0x100 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.615 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ctratt]="0x8000"' 00:09:13.616 02:04:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[ctratt]=0x8000 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rrls]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rrls]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cntrltype]="1"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cntrltype]=1 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt1]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt1]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt2]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt2]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[crdt3]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[crdt3]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nvmsr]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nvmsr]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwci]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwci]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mec]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mec]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 
-- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oacs]="0x12a"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oacs]=0x12a 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acl]="3"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acl]=3 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[aerl]="3"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[aerl]=3 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[frmw]="0x3"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[frmw]=0x3 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[lpa]="0x7"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[lpa]=0x7 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[elpe]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[elpe]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[npss]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[npss]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[avscc]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[avscc]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[apsta]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[apsta]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.616 02:04:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[wctemp]="343"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[wctemp]=343 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cctemp]="373"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cctemp]=373 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mtfa]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mtfa]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmpre]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmpre]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmin]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmin]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[tnvmcap]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[tnvmcap]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[unvmcap]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[unvmcap]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rpmbs]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rpmbs]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[edstt]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[edstt]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[dsto]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[dsto]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # 
IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fwug]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fwug]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[kas]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[kas]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hctma]="0"' 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hctma]=0 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.616 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mntmt]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mntmt]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mxtmt]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mxtmt]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sanicap]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sanicap]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmminds]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmminds]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[hmmaxd]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[hmmaxd]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nsetidmax]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nsetidmax]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme0[endgidmax]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[endgidmax]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anatt]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anatt]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anacap]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anacap]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[anagrpmax]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[anagrpmax]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nanagrpid]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nanagrpid]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[pels]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[pels]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[domainid]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[domainid]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[megcap]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[megcap]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sqes]="0x66"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sqes]=0x66 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[cqes]="0x44"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[cqes]=0x44 00:09:13.617 02:04:37 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcmd]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcmd]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nn]="256"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nn]=256 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[oncs]="0x15d"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[oncs]=0x15d 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fuses]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fuses]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fna]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fna]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[vwc]="0x7"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[vwc]=0x7 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awun]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awun]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[awupf]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[awupf]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icsvscc]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icsvscc]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 
02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[nwpc]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[nwpc]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[acwu]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[acwu]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ocfs]="0x3"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ocfs]=0x3 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[sgls]="0x1"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[sgls]=0x1 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[mnan]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[mnan]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxdna]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxdna]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[maxcna]="0"' 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[maxcna]=0 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12341 ]] 00:09:13.617 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[subnqn]="nqn.2019-08.org.qemu:12341"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[subnqn]=nqn.2019-08.org.qemu:12341 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ioccsz]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ioccsz]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[iorcsz]="0"' 00:09:13.618 02:04:37 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0[iorcsz]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[icdoff]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[icdoff]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[fcatt]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[fcatt]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[msdbd]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[msdbd]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ofcs]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ofcs]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0[active_power_workload]="-"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # nvme0[active_power_workload]=- 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme0_ns 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/ng0n1 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng0n1 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng0n1 id-ns /dev/ng0n1 00:09:13.618 02:04:37 
nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng0n1 reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng0n1=()' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng0n1 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsze]="0x140000"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsze]=0x140000 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[ncap]="0x140000"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[ncap]=0x140000 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nuse]="0x140000"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nuse]=0x140000 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsfeat]="0x14"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsfeat]=0x14 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nlbaf]="7"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nlbaf]=7 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[flbas]="0x4"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[flbas]=0x4 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mc]="0x3"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mc]=0x3 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dpc]="0x1f"' 00:09:13.618 02:04:37 nvme_fdp -- 
nvme/functions.sh@23 -- # ng0n1[dpc]=0x1f 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dps]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dps]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nmic]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nmic]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[rescap]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[rescap]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[fpi]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[fpi]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[dlfeat]="1"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[dlfeat]=1 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawun]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawun]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nawupf]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nawupf]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nacwu]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nacwu]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabsn]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabsn]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 
nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabo]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabo]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nabspf]="0"' 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nabspf]=0 00:09:13.618 02:04:37 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.618 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.618 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[noiob]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[noiob]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmcap]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmcap]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwg]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwg]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npwa]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npwa]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npdg]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npdg]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[npda]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[npda]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nows]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nows]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mssrl]="128"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mssrl]=128 
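The fields being captured for ng0n1 at this point, mssrl=128 just above, with mcl=128 and msrc=127 read immediately below, are the namespace's Copy command limits: maximum length of a single source range, maximum total copy length, and maximum number of source ranges (msrc is a 0's-based value in the NVMe spec, so 127 actually permits 128 ranges). A short sketch of validating a copy request against these limits; all variable names and the example ranges are illustrative:

# Sketch: check a proposed Copy against the ng0n1 limits captured here.
# The msrc comparison is deliberately conservative (spec value is 0's-based).
mssrl=128 mcl=128 msrc=127
ranges_nlb=(64 32 16)   # hypothetical source ranges, in logical blocks

total=0
(( ${#ranges_nlb[@]} <= msrc )) || { echo "too many source ranges"; exit 1; }
for nlb in "${ranges_nlb[@]}"; do
    (( nlb <= mssrl )) || { echo "single range of $nlb LBAs exceeds mssrl"; exit 1; }
    (( total += nlb ))
done
(( total <= mcl )) || { echo "total of $total LBAs exceeds mcl"; exit 1; }
echo "copy of $total LBAs across ${#ranges_nlb[@]} ranges fits the limits"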
00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[mcl]="128"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[mcl]=128 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[msrc]="127"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[msrc]=127 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nulbaf]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nulbaf]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[anagrpid]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[anagrpid]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nsattr]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nsattr]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nvmsetid]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nvmsetid]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[endgid]="0"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[endgid]=0 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[nguid]="00000000000000000000000000000000"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[nguid]=00000000000000000000000000000000 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[eui64]="0000000000000000"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[eui64]=0000000000000000 00:09:13.619 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
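(Editor's note: the lbaf0..lbaf7 strings captured just above describe the namespace's supported LBA formats, and the "(in use)" marker on lbaf4 matches the flbas value 0x4 recorded for this namespace: FLBAS bits 3:0 select the active format, and that format's lbads field is log2 of the data block size (bits 5:4 only extend the index when more than 16 formats exist, which is not the case here, since nlbaf is 7). A hedged sketch decoding the values shown; variable names are illustrative:

    # Decode the in-use LBA format from the captured fields (illustrative).
    flbas=0x4
    lbaf4='ms:0 lbads:12 rp:0 (in use)'
    idx=$(( flbas & 0xf ))                                   # FLBAS bits 3:0 = format index
    [[ $lbaf4 =~ lbads:([0-9]+) ]] && lbads=${BASH_REMATCH[1]}
    echo "lbaf$idx in use, $(( 1 << lbads ))-byte blocks"    # lbaf4, 4096-byte blocks

So this namespace is formatted with 4096-byte data blocks and no metadata (ms:0).)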
00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng0n1 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme0/nvme0n1 ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme0n1 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme0n1 id-ns /dev/nvme0n1 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme0n1 reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme0n1=()' 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme0n1 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.619 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsze]="0x140000"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsze]=0x140000 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[ncap]="0x140000"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[ncap]=0x140000 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x140000 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nuse]="0x140000"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nuse]=0x140000 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsfeat]="0x14"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsfeat]=0x14 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nlbaf]="7"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nlbaf]=7 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[flbas]="0x4"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[flbas]=0x4 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mc]="0x3"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mc]=0x3 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dpc]="0x1f"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dpc]=0x1f 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dps]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dps]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nmic]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nmic]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[rescap]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[rescap]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[fpi]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[fpi]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[dlfeat]="1"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[dlfeat]=1 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawun]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawun]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nawupf]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nawupf]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nacwu]="0"' 00:09:13.620 02:04:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme0n1[nacwu]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabsn]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabsn]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabo]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabo]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nabspf]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nabspf]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[noiob]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[noiob]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmcap]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmcap]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwg]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwg]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npwa]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npwa]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npdg]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npdg]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[npda]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[npda]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r 
reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nows]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nows]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mssrl]="128"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mssrl]=128 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[mcl]="128"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[mcl]=128 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[msrc]="127"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[msrc]=127 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nulbaf]="0"' 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nulbaf]=0 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.620 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[anagrpid]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[anagrpid]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nsattr]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nsattr]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[nvmsetid]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nvmsetid]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[endgid]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[endgid]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.621 02:04:38 nvme_fdp 
-- nvme/functions.sh@23 -- # eval 'nvme0n1[nguid]="00000000000000000000000000000000"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[nguid]=00000000000000000000000000000000 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[eui64]="0000000000000000"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[eui64]=0000000000000000 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf6]="ms:16 lbads:12 rp:0 
"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme0n1[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme0n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme0n1 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme0_ns 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:11.0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme1 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:10.0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:10.0 00:09:13.621 02:04:38 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.621 02:04:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:13.621 02:04:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.621 02:04:38 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme1 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme1 id-ctrl /dev/nvme1 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme1 reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme1=()' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme1 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vid]="0x1b36"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vid]=0x1b36 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ssvid]="0x1af4"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ssvid]=0x1af4 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12340 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme1[sn]="12340 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sn]='12340 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mn]="QEMU NVMe Ctrl "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mn]='QEMU NVMe Ctrl ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fr]="8.0.0 "' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fr]='8.0.0 ' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rab]="6"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rab]=6 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ieee]="525400"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ieee]=525400 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cmic]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cmic]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mdts]="7"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mdts]=7 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntlid]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntlid]=0 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ver]="0x10400"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ver]=0x10400 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3r]="0"' 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3r]=0 00:09:13.621 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.621 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rtd3e]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rtd3e]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oaes]="0x100"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oaes]=0x100 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ctratt]="0x8000"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ctratt]=0x8000 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rrls]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rrls]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cntrltype]="1"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cntrltype]=1 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt1]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt1]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt2]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt2]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[crdt3]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[crdt3]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nvmsr]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nvmsr]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwci]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwci]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mec]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mec]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oacs]="0x12a"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oacs]=0x12a 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acl]="3"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acl]=3 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[aerl]="3"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[aerl]=3 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[frmw]="0x3"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[frmw]=0x3 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[lpa]="0x7"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[lpa]=0x7 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[elpe]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[elpe]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[npss]="0"' 
00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[npss]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[avscc]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[avscc]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[apsta]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[apsta]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[wctemp]="343"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[wctemp]=343 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cctemp]="373"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cctemp]=373 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mtfa]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mtfa]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmpre]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmpre]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmin]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmin]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[tnvmcap]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[tnvmcap]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[unvmcap]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[unvmcap]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rpmbs]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rpmbs]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[edstt]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[edstt]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[dsto]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[dsto]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fwug]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fwug]=0 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[kas]="0"' 00:09:13.622 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[kas]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hctma]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hctma]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mntmt]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mntmt]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mxtmt]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mxtmt]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sanicap]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sanicap]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmminds]="0"' 
00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmminds]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[hmmaxd]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[hmmaxd]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nsetidmax]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nsetidmax]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[endgidmax]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[endgidmax]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anatt]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anatt]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anacap]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anacap]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[anagrpmax]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[anagrpmax]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nanagrpid]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nanagrpid]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[pels]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[pels]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[domainid]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[domainid]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[megcap]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[megcap]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sqes]="0x66"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sqes]=0x66 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[cqes]="0x44"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[cqes]=0x44 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcmd]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcmd]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nn]="256"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nn]=256 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[oncs]="0x15d"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[oncs]=0x15d 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fuses]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fuses]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fna]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fna]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[vwc]="0x7"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[vwc]=0x7 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme1[awun]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awun]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[awupf]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[awupf]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icsvscc]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icsvscc]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[nwpc]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[nwpc]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[acwu]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[acwu]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ocfs]="0x3"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ocfs]=0x3 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[sgls]="0x1"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[sgls]=0x1 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[mnan]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[mnan]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxdna]="0"' 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxdna]=0 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.623 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[maxcna]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[maxcna]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:12340 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[subnqn]="nqn.2019-08.org.qemu:12340"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[subnqn]=nqn.2019-08.org.qemu:12340 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ioccsz]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ioccsz]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[iorcsz]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[iorcsz]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[icdoff]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[icdoff]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[fcatt]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[fcatt]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[msdbd]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[msdbd]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ofcs]="0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ofcs]=0 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[rwt]='0 
rwl:0 idle_power:- active_power:-' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme1[active_power_workload]="-"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1[active_power_workload]=- 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=ng1n1 reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'ng1n1=()' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsze]="0x17a17a"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsze]=0x17a17a 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[ncap]="0x17a17a"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[ncap]=0x17a17a 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x17a17a ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nuse]="0x17a17a"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nuse]=0x17a17a 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nsfeat]="0x14"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nsfeat]=0x14 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'ng1n1[nlbaf]="7"' 00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1[nlbaf]=7 00:09:13.624 02:04:38 
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme1_ns
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/ng1n1 ]]
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng1n1
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng1n1 id-ns /dev/ng1n1
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng1n1
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:09:13.624 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng1n1
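The id-ns values just captured already determine the namespace geometry: FLBAS bits 3:0 select the LBA format index, and each lbafN entry carries lbads, the log2 of the data block size. A small sketch under those assumptions, seeded with the ng1n1 values from this run (flbas=0x7 selects lbaf7, whose lbads:12 gives 4096-byte blocks, matching the "(in use)" marker above):

#!/usr/bin/env bash
# Sketch: derive the in-use block size from id-ns values parsed as above.
# The array literal copies two of the ng1n1 values from this run.
declare -A ns=([flbas]=0x7 [lbaf7]='ms:64 lbads:12 rp:0 (in use)')

fmt=$(( ${ns[flbas]} & 0xf ))   # FLBAS bits 3:0: index of the LBA format in use
lbaf=${ns[lbaf$fmt]}            # -> 'ms:64 lbads:12 rp:0 (in use)'
lbads=${lbaf#*lbads:}           # isolate the lbads field ...
lbads=${lbads%% *}              # ... -> 12
echo "lbaf$fmt in use: $((1 << lbads))-byte data blocks"   # 4096 here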
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme1/nvme1n1 ]]
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme1n1
00:09:13.625 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme1n1 id-ns /dev/nvme1n1
00:09:13.626 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme1n1
00:09:13.626 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nsze=0x17a17a ncap=0x17a17a nuse=0x17a17a nsfeat=0x14 nlbaf=7 flbas=0x7 mc=0x3 dpc=0x1f dps=0
00:09:13.626 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nmic=0 rescap=0 fpi=0 dlfeat=1 nawun=0 nawupf=0 nacwu=0 nabsn=0 nabo=0 nabspf=0 noiob=0 nvmcap=0
00:09:13.626 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: npwg=0 npwa=0 npdg=0 npda=0 nows=0 mssrl=128 mcl=128 msrc=127 nulbaf=0 anagrpid=0 nsattr=0 nvmsetid=0 endgid=0
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: nguid=00000000000000000000000000000000 eui64=0000000000000000
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf0='ms:0 lbads:9 rp:0' lbaf1='ms:8 lbads:9 rp:0' lbaf2='ms:16 lbads:9 rp:0' lbaf3='ms:64 lbads:9 rp:0'
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme1n1: lbaf4='ms:0 lbads:12 rp:0' lbaf5='ms:8 lbads:12 rp:0' lbaf6='ms:16 lbads:12 rp:0' lbaf7='ms:64 lbads:12 rp:0 (in use)'
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme1n1
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme1
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme1_ns
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:10.0
00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme1
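Controller nvme1 (PCI 0000:00:10.0, subnqn nqn.2019-08.org.qemu:12340) is now fully registered and the @47/@54 loops move on to the next controller. Those loops walk /sys/class/nvme in sysfs, gate each controller on pci_can_use, and match both the generic character nodes (ngXnY) and the block nodes (nvmeXnY) of its namespaces with an extglob. An illustrative sketch of the same enumeration follows; it is not the project's functions.sh, and the PCI-address lookup through the device symlink is an assumption about typical sysfs layout.

#!/usr/bin/env bash
# Illustrative enumeration of NVMe controllers and their namespace nodes.
shopt -s extglob nullglob
declare -A ctrls bdfs

for ctrl in /sys/class/nvme/nvme+([0-9]); do
    ctrl_dev=${ctrl##*/}                 # e.g. nvme1
    ctrls[$ctrl_dev]=$ctrl_dev
    # Assumption: $ctrl/device links to the PCI function, e.g. 0000:00:10.0.
    bdfs[$ctrl_dev]=$(basename "$(readlink -f "$ctrl/device")")
    # Same extglob the trace uses: ng<idx>n* or nvme<idx>n*.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        echo "$ctrl_dev (${bdfs[$ctrl_dev]}): namespace node ${ns##*/}"
    done
done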
[[ -n '' ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vid]="0x1b36"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vid]=0x1b36 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ssvid]="0x1af4"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ssvid]=0x1af4 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12342 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[sn]="12342 "' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[sn]='12342 ' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mn]="QEMU NVMe Ctrl "' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mn]='QEMU NVMe Ctrl ' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fr]="8.0.0 "' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fr]='8.0.0 ' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rab]="6"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rab]=6 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ieee]="525400"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ieee]=525400 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cmic]="0"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cmic]=0 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mdts]="7"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mdts]=7 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntlid]="0"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntlid]=0 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ver]="0x10400"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ver]=0x10400 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3r]="0"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3r]=0 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rtd3e]="0"' 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rtd3e]=0 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.627 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oaes]="0x100"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oaes]=0x100 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x8000 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[ctratt]="0x8000"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ctratt]=0x8000 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rrls]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rrls]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cntrltype]="1"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cntrltype]=1 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt1]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt1]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt2]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt2]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[crdt3]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[crdt3]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[nvmsr]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nvmsr]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[vwci]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[vwci]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mec]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mec]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[oacs]="0x12a"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[oacs]=0x12a 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[acl]="3"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[acl]=3 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[aerl]="3"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[aerl]=3 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[frmw]="0x3"' 00:09:13.628 02:04:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2[frmw]=0x3 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[lpa]="0x7"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[lpa]=0x7 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[elpe]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[elpe]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[npss]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[npss]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[avscc]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[avscc]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[apsta]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[apsta]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[wctemp]="343"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[wctemp]=343 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[cctemp]="373"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[cctemp]=373 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mtfa]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mtfa]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmpre]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmpre]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 
00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hmmin]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hmmin]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[tnvmcap]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[tnvmcap]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[unvmcap]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[unvmcap]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[rpmbs]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[rpmbs]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[edstt]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[edstt]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[dsto]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[dsto]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[fwug]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fwug]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[kas]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[kas]=0 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[hctma]="0"' 00:09:13.628 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[hctma]=0 00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2[mntmt]="0"' 00:09:13.629 02:04:38 nvme_fdp -- 
00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mntmt]=0 nvme2[mxtmt]=0 nvme2[sanicap]=0 nvme2[hmminds]=0 nvme2[hmmaxd]=0 nvme2[nsetidmax]=0 nvme2[endgidmax]=0 nvme2[anatt]=0 nvme2[anacap]=0 nvme2[anagrpmax]=0
00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[nanagrpid]=0 nvme2[pels]=0 nvme2[domainid]=0 nvme2[megcap]=0 nvme2[sqes]=0x66 nvme2[cqes]=0x44 nvme2[maxcmd]=0 nvme2[nn]=256 nvme2[oncs]=0x15d
00:09:13.629 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[fuses]=0 nvme2[fna]=0 nvme2[vwc]=0x7 nvme2[awun]=0 nvme2[awupf]=0 nvme2[icsvscc]=0 nvme2[nwpc]=0 nvme2[acwu]=0 nvme2[ocfs]=0x3 nvme2[sgls]=0x1
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[mnan]=0 nvme2[maxdna]=0 nvme2[maxcna]=0 nvme2[subnqn]=nqn.2019-08.org.qemu:12342 nvme2[ioccsz]=0 nvme2[iorcsz]=0 nvme2[icdoff]=0 nvme2[fcatt]=0 nvme2[msdbd]=0
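The oncs value just recorded (0x15d) is the Identify Controller ONCS bitmask: each bit flags an optional NVM command set feature. A minimal bash sketch of one way to decode it (the name list follows the NVMe base specification bit order; it is illustrative and not part of nvme/functions.sh):

    # Decode ONCS: bit i set => the i-th optional command/feature is supported.
    oncs=0x15d
    names=(compare write_uncorrectable dataset_mgmt write_zeroes
           save_select_features reservations timestamp verify copy)
    for i in "${!names[@]}"; do
        (( oncs & (1 << i) )) && echo "ONCS: ${names[$i]} supported"
    done
    # 0x15d -> compare, dataset_mgmt, write_zeroes, save_select_features,
    #          timestamp, copy

The set copy bit (bit 8) is consistent with the per-namespace copy limits (mssrl/mcl/msrc) captured further down in this trace.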
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2[ofcs]=0 nvme2[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' nvme2[rwt]='0 rwl:0 idle_power:- active_power:-' nvme2[active_power_workload]=-
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme2_ns
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n1 ]]
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n1
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n1 id-ns /dev/ng2n1
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n1
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nsze]=0x100000 ng2n1[ncap]=0x100000
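Every assignment echoed above comes from the same small parser: nvme_get runs nvme-cli (functions.sh@16), then splits each "field : value" line on ':' and evals it into a bash associative array named after the device (functions.sh@21-23). A minimal standalone sketch of that pattern, assuming a plain nvme binary on PATH (the real script pins /usr/local/src/nvme-cli/nvme and carries extra trimming logic):

    # Populate an associative array (e.g. nvme2, ng2n1) from nvme-cli output.
    nvme_get() {
        local ref=$1 reg val; shift
        local -gA "$ref=()"                 # e.g. declares ng2n1=()
        while IFS=: read -r reg val; do
            reg=${reg//[[:space:]]/}        # field name, padding stripped
            val=${val# }                    # value, leading space dropped
            [[ -n $reg && -n $val ]] || continue
            eval "${ref}[\$reg]=\$val"      # -> ng2n1[nsze]=0x100000
        done < <(nvme "$@")                 # e.g. nvme id-ns /dev/ng2n1
    }

    nvme_get ng2n1 id-ns /dev/ng2n1
    echo "${ng2n1[nsze]}"                   # 0x100000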
00:09:13.630 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nuse]=0x100000 ng2n1[nsfeat]=0x14 ng2n1[nlbaf]=7 ng2n1[flbas]=0x4 ng2n1[mc]=0x3 ng2n1[dpc]=0x1f ng2n1[dps]=0 ng2n1[nmic]=0 ng2n1[rescap]=0 ng2n1[fpi]=0 ng2n1[dlfeat]=1
00:09:13.631 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[nawun]=0 ng2n1[nawupf]=0 ng2n1[nacwu]=0 ng2n1[nabsn]=0 ng2n1[nabo]=0 ng2n1[nabspf]=0 ng2n1[noiob]=0 ng2n1[nvmcap]=0 ng2n1[npwg]=0 ng2n1[npwa]=0 ng2n1[npdg]=0 ng2n1[npda]=0 ng2n1[nows]=0
00:09:13.631 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[mssrl]=128 ng2n1[mcl]=128 ng2n1[msrc]=127 ng2n1[nulbaf]=0 ng2n1[anagrpid]=0 ng2n1[nsattr]=0 ng2n1[nvmsetid]=0 ng2n1[endgid]=0 ng2n1[nguid]=00000000000000000000000000000000 ng2n1[eui64]=0000000000000000
00:09:13.631 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n1[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n1[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n1[lbaf3]='ms:64 lbads:9 rp:0 '
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n1[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n1[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n1[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n1
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n2 ]]
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n2
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n2 id-ns /dev/ng2n2
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n2
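The namespace walk itself (functions.sh@54) relies on bash extglob: the pattern @("ng${ctrl##*nvme}"|"${ctrl##*/}n")* matches both the generic character nodes (ng2n1, ng2n2, ...) and the block nodes (nvme2n1, ...) under the controller's sysfs directory, and ${ns##*n} then strips everything through the last 'n' to recover the namespace index used as the _ctrl_ns key. A self-contained sketch with the controller path hardcoded for illustration:

    shopt -s extglob                         # @(...) alternation needs extglob
    ctrl=/sys/class/nvme/nvme2
    # ${ctrl##*nvme} -> "2" and ${ctrl##*/} -> "nvme2", so the pattern
    # expands to @(ng2|nvme2n)* and matches ng2n1..ng2n3 plus nvme2n1.
    for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")*; do
        [[ -e $ns ]] || continue
        echo "nsid=${ns##*n} dev=${ns##*/}"  # e.g. nsid=1 dev=ng2n1
    done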
00:09:13.632 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nsze]=0x100000 ng2n2[ncap]=0x100000 ng2n2[nuse]=0x100000 ng2n2[nsfeat]=0x14 ng2n2[nlbaf]=7 ng2n2[flbas]=0x4 ng2n2[mc]=0x3 ng2n2[dpc]=0x1f ng2n2[dps]=0 ng2n2[nmic]=0
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[rescap]=0 ng2n2[fpi]=0 ng2n2[dlfeat]=1 ng2n2[nawun]=0 ng2n2[nawupf]=0 ng2n2[nacwu]=0 ng2n2[nabsn]=0 ng2n2[nabo]=0 ng2n2[nabspf]=0 ng2n2[noiob]=0 ng2n2[nvmcap]=0
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[npwg]=0 ng2n2[npwa]=0 ng2n2[npdg]=0 ng2n2[npda]=0 ng2n2[nows]=0 ng2n2[mssrl]=128 ng2n2[mcl]=128 ng2n2[msrc]=127 ng2n2[nulbaf]=0 ng2n2[anagrpid]=0 ng2n2[nsattr]=0 ng2n2[nvmsetid]=0 ng2n2[endgid]=0
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[nguid]=00000000000000000000000000000000 ng2n2[eui64]=0000000000000000
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n2[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n2[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n2[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n2[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n2[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n2[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n2[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n2
00:09:13.633 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/ng2n3 ]]
00:09:13.634 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=ng2n3
00:09:13.634 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get ng2n3 id-ns /dev/ng2n3
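Each namespace on this controller reports the same geometry: nsze = ncap = nuse = 0x100000 blocks, with flbas=0x4 selecting lbaf4 (ms:0 lbads:12), i.e. 4096-byte blocks and no metadata. A quick arithmetic check in shell, with the values copied from the trace above:

    flbas=0x4; nsze=0x100000; lbads=12   # lbaf4 = 'ms:0 lbads:12 rp:0 (in use)'
    fmt=$(( flbas & 0xf ))               # low nibble = in-use LBA format index
    block=$(( 1 << lbads ))              # lbads is log2(block size)
    echo "lbaf$fmt: ${block}B blocks, $(( (nsze * block) >> 30 )) GiB per namespace"
    # -> lbaf4: 4096B blocks, 4 GiB per namespace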
00:09:13.634 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/ng2n3
00:09:13.634 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nsze]=0x100000 ng2n3[ncap]=0x100000 ng2n3[nuse]=0x100000 ng2n3[nsfeat]=0x14 ng2n3[nlbaf]=7 ng2n3[flbas]=0x4 ng2n3[mc]=0x3 ng2n3[dpc]=0x1f ng2n3[dps]=0 ng2n3[nmic]=0
00:09:13.634 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[rescap]=0 ng2n3[fpi]=0 ng2n3[dlfeat]=1 ng2n3[nawun]=0 ng2n3[nawupf]=0 ng2n3[nacwu]=0 ng2n3[nabsn]=0 ng2n3[nabo]=0 ng2n3[nabspf]=0 ng2n3[noiob]=0 ng2n3[nvmcap]=0
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[npwg]=0 ng2n3[npwa]=0 ng2n3[npdg]=0 ng2n3[npda]=0 ng2n3[nows]=0 ng2n3[mssrl]=128 ng2n3[mcl]=128 ng2n3[msrc]=127 ng2n3[nulbaf]=0 ng2n3[anagrpid]=0 ng2n3[nsattr]=0 ng2n3[nvmsetid]=0 ng2n3[endgid]=0
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[nguid]=00000000000000000000000000000000 ng2n3[eui64]=0000000000000000
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # ng2n3[lbaf0]='ms:0 lbads:9 rp:0 ' ng2n3[lbaf1]='ms:8 lbads:9 rp:0 ' ng2n3[lbaf2]='ms:16 lbads:9 rp:0 ' ng2n3[lbaf3]='ms:64 lbads:9 rp:0 ' ng2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' ng2n3[lbaf5]='ms:8 lbads:12 rp:0 ' ng2n3[lbaf6]='ms:16 lbads:12 rp:0 ' ng2n3[lbaf7]='ms:64 lbads:12 rp:0 '
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=ng2n3
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n1 ]]
00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n1
nvme/functions.sh@57 -- # nvme_get nvme2n1 id-ns /dev/nvme2n1 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n1 reg val 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n1=()' 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n1 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.635 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsze]="0x100000"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsze]=0x100000 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[ncap]="0x100000"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[ncap]=0x100000 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nuse]="0x100000"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nuse]=0x100000 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsfeat]="0x14"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsfeat]=0x14 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nlbaf]="7"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nlbaf]=7 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[flbas]="0x4"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[flbas]=0x4 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mc]="0x3"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mc]=0x3 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.636 
02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dpc]="0x1f"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dpc]=0x1f 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dps]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dps]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nmic]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nmic]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[rescap]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[rescap]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[fpi]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[fpi]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[dlfeat]="1"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[dlfeat]=1 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawun]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawun]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nawupf]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nawupf]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nacwu]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nacwu]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabsn]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabsn]=0 00:09:13.636 02:04:38 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabo]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabo]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nabspf]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nabspf]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[noiob]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[noiob]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmcap]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmcap]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwg]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwg]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npwa]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npwa]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npdg]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npdg]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[npda]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[npda]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nows]="0"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nows]=0 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- 
# [[ -n 128 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mssrl]="128"' 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mssrl]=128 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.636 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[mcl]="128"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[mcl]=128 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[msrc]="127"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[msrc]=127 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nulbaf]="0"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nulbaf]=0 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[anagrpid]="0"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[anagrpid]=0 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nsattr]="0"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nsattr]=0 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nvmsetid]="0"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nvmsetid]=0 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[endgid]="0"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[endgid]=0 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[nguid]="00000000000000000000000000000000"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[nguid]=00000000000000000000000000000000 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.637 
02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[eui64]="0000000000000000"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[eui64]=0000000000000000 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n1[lbaf7]="ms:64 lbads:12 rp:0 "' 
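
The trace above, and the capture that continues below, is one pattern applied over and over: the nvme_get helper in nvme/functions.sh runs /usr/local/src/nvme-cli/nvme id-ns against a namespace device, splits each output line at the first ':' with IFS=: read -r reg val, skips empty values, and evals the pair into a global associative array (ng2n3, nvme2n1, nvme2n2, ...). A minimal sketch of that loop, assuming nvme-cli's "key : value" output format; the array name ns_info and the exact whitespace trimming are illustrative, not the script's literal code:

  #!/usr/bin/env bash
  # Sketch of the id-ns capture traced above. Assumes nvme-cli is
  # installed and "$1" is a namespace device such as /dev/nvme2n1.
  declare -A ns_info                    # stands in for nvme2n1=() etc.
  while IFS=: read -r reg val; do
      reg=${reg//[[:space:]]/}          # "lbaf  0 " -> "lbaf0", as in the log
      [[ -n $reg && -n $val ]] || continue
      ns_info[$reg]=${val# }            # e.g. ns_info[nsze]=0x100000
  done < <(nvme id-ns "$1")
  printf 'nsze=%s flbas=%s\n' "${ns_info[nsze]}" "${ns_info[flbas]}"

Run against the devices enumerated here, this should reproduce the captured values (nsze=0x100000, flbas=0x4, nlbaf=7); the real helper differs mainly in eval'ing into a caller-named array so later stages can look namespaces up by name.
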
00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n1[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n1 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n2 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n2 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n2 id-ns /dev/nvme2n2 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n2 reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n2=()' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n2 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsze]="0x100000"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsze]=0x100000 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[ncap]="0x100000"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[ncap]=0x100000 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nuse]="0x100000"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nuse]=0x100000 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsfeat]="0x14"' 00:09:13.637 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsfeat]=0x14 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nlbaf]="7"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nlbaf]=7 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[flbas]="0x4"' 00:09:13.638 02:04:38 nvme_fdp 
-- nvme/functions.sh@23 -- # nvme2n2[flbas]=0x4 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mc]="0x3"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mc]=0x3 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dpc]="0x1f"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dpc]=0x1f 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dps]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dps]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nmic]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nmic]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[rescap]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[rescap]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[fpi]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[fpi]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[dlfeat]="1"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[dlfeat]=1 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawun]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawun]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nawupf]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nawupf]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 
-- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nacwu]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nacwu]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabsn]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabsn]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabo]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabo]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nabspf]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nabspf]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[noiob]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[noiob]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmcap]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmcap]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwg]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwg]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npwa]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npwa]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[npdg]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npdg]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 
'nvme2n2[npda]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[npda]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nows]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nows]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mssrl]="128"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mssrl]=128 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[mcl]="128"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[mcl]=128 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[msrc]="127"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[msrc]=127 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nulbaf]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nulbaf]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[anagrpid]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[anagrpid]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nsattr]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nsattr]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nvmsetid]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nvmsetid]=0 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[endgid]="0"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[endgid]=0 00:09:13.638 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[nguid]="00000000000000000000000000000000"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[nguid]=00000000000000000000000000000000 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[eui64]="0000000000000000"' 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[eui64]=0000000000000000 00:09:13.638 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 
nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n2[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n2[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n2 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@54 -- # for ns in "$ctrl/"@("ng${ctrl##*nvme}"|"${ctrl##*/}n")* 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@55 -- # [[ -e /sys/class/nvme/nvme2/nvme2n3 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@56 -- # ns_dev=nvme2n3 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@57 -- # nvme_get nvme2n3 id-ns /dev/nvme2n3 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme2n3 reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme2n3=()' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ns /dev/nvme2n3 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsze]="0x100000"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsze]=0x100000 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[ncap]="0x100000"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[ncap]=0x100000 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100000 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nuse]="0x100000"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nuse]=0x100000 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x14 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsfeat]="0x14"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsfeat]=0x14 00:09:13.639 02:04:38 
nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nlbaf]="7"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nlbaf]=7 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x4 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[flbas]="0x4"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[flbas]=0x4 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mc]="0x3"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mc]=0x3 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1f ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dpc]="0x1f"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dpc]=0x1f 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dps]="0"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dps]=0 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nmic]="0"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nmic]=0 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[rescap]="0"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[rescap]=0 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[fpi]="0"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[fpi]=0 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[dlfeat]="1"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[dlfeat]=1 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- 
nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawun]="0"' 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawun]=0 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.639 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nawupf]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nawupf]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nacwu]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nacwu]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabsn]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabsn]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabo]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabo]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nabspf]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nabspf]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[noiob]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[noiob]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmcap]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmcap]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwg]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npwg]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npwa]="0"' 00:09:13.640 02:04:38 nvme_fdp -- 
nvme/functions.sh@23 -- # nvme2n3[npwa]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npdg]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npdg]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[npda]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[npda]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nows]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nows]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mssrl]="128"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mssrl]=128 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 128 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[mcl]="128"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[mcl]=128 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 127 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[msrc]="127"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[msrc]=127 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nulbaf]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nulbaf]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[anagrpid]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[anagrpid]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nsattr]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nsattr]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nvmsetid]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nvmsetid]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[endgid]="0"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[endgid]=0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000000000000000000000000000 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[nguid]="00000000000000000000000000000000"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[nguid]=00000000000000000000000000000000 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0000000000000000 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[eui64]="0000000000000000"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[eui64]=0000000000000000 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:9 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf0]="ms:0 lbads:9 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf0]='ms:0 lbads:9 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:9 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf1]="ms:8 lbads:9 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf1]='ms:8 lbads:9 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:9 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf2]="ms:16 lbads:9 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf2]='ms:16 lbads:9 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:9 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf3]="ms:64 lbads:9 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf3]='ms:64 lbads:9 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:0 lbads:12 rp:0 (in use) ]] 00:09:13.640 02:04:38 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf4]="ms:0 lbads:12 rp:0 (in use)"' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf4]='ms:0 lbads:12 rp:0 (in use)' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:8 lbads:12 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf5]="ms:8 lbads:12 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf5]='ms:8 lbads:12 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:16 lbads:12 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf6]="ms:16 lbads:12 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf6]='ms:16 lbads:12 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n ms:64 lbads:12 rp:0 ]] 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme2n3[lbaf7]="ms:64 lbads:12 rp:0 "' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme2n3[lbaf7]='ms:64 lbads:12 rp:0 ' 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@58 -- # _ctrl_ns[${ns##*n}]=nvme2n3 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme2 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme2_ns 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:12.0 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme2 00:09:13.640 02:04:38 nvme_fdp -- nvme/functions.sh@47 -- # for ctrl in /sys/class/nvme/nvme* 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@48 -- # [[ -e /sys/class/nvme/nvme3 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@49 -- # pci=0000:00:13.0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@50 -- # pci_can_use 0000:00:13.0 00:09:13.641 02:04:38 nvme_fdp -- scripts/common.sh@18 -- # local i 00:09:13.641 02:04:38 nvme_fdp -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:13.641 02:04:38 nvme_fdp -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:13.641 02:04:38 nvme_fdp -- scripts/common.sh@27 -- # return 0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@51 -- # ctrl_dev=nvme3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@52 -- # nvme_get nvme3 id-ctrl /dev/nvme3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@17 -- # local ref=nvme3 reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@18 -- # shift 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@20 -- # local -gA 'nvme3=()' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@16 -- # /usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n '' ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1b36 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vid]="0x1b36"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vid]=0x1b36 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1af4 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ssvid]="0x1af4"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ssvid]=0x1af4 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 12343 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sn]="12343 "' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sn]='12343 ' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n QEMU NVMe Ctrl ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mn]="QEMU NVMe Ctrl "' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mn]='QEMU NVMe Ctrl ' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 8.0.0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fr]="8.0.0 "' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fr]='8.0.0 ' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 6 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rab]="6"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rab]=6 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 525400 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ieee]="525400"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ieee]=525400 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x2 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cmic]="0x2"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cmic]=0x2 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 7 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mdts]="7"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mdts]=7 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 
00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntlid]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntlid]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x10400 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ver]="0x10400"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ver]=0x10400 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3r]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3r]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rtd3e]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rtd3e]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x100 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oaes]="0x100"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oaes]=0x100 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x88010 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ctratt]="0x88010"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ctratt]=0x88010 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rrls]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rrls]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cntrltype]="1"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cntrltype]=1 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 00000000-0000-0000-0000-000000000000 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fguid]="00000000-0000-0000-0000-000000000000"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fguid]=00000000-0000-0000-0000-000000000000 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- 
nvme/functions.sh@23 -- # eval 'nvme3[crdt1]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt1]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt2]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt2]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[crdt3]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[crdt3]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nvmsr]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nvmsr]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwci]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwci]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mec]="0"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mec]=0 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x12a ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oacs]="0x12a"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oacs]=0x12a 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acl]="3"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acl]=3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 3 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[aerl]="3"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[aerl]=3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[frmw]="0x3"' 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[frmw]=0x3 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.641 
02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.641 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[lpa]="0x7"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[lpa]=0x7 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[elpe]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[elpe]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[npss]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[npss]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[avscc]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[avscc]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[apsta]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[apsta]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 343 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[wctemp]="343"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[wctemp]=343 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 373 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cctemp]="373"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cctemp]=373 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mtfa]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mtfa]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmpre]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmpre]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # 
eval 'nvme3[hmmin]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmin]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[tnvmcap]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[tnvmcap]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[unvmcap]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[unvmcap]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rpmbs]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rpmbs]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[edstt]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[edstt]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[dsto]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[dsto]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fwug]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fwug]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[kas]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[kas]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hctma]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hctma]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mntmt]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mntmt]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- 
nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[mxtmt]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mxtmt]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sanicap]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sanicap]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmminds]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmminds]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[hmmaxd]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[hmmaxd]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nsetidmax]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nsetidmax]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 1 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[endgidmax]="1"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[endgidmax]=1 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anatt]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anatt]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anacap]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anacap]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[anagrpmax]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[anagrpmax]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- 
# eval 'nvme3[nanagrpid]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nanagrpid]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[pels]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[pels]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[domainid]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[domainid]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[megcap]="0"' 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[megcap]=0 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x66 ]] 00:09:13.642 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sqes]="0x66"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sqes]=0x66 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x44 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[cqes]="0x44"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[cqes]=0x44 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcmd]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcmd]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 256 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nn]="256"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nn]=256 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x15d ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[oncs]="0x15d"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[oncs]=0x15d 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fuses]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fuses]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 
00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fna]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fna]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x7 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[vwc]="0x7"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[vwc]=0x7 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awun]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awun]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[awupf]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[awupf]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icsvscc]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icsvscc]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[nwpc]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[nwpc]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[acwu]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[acwu]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x3 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ocfs]="0x3"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ocfs]=0x3 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0x1 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[sgls]="0x1"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[sgls]=0x1 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 
-- # eval 'nvme3[mnan]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[mnan]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxdna]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxdna]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[maxcna]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[maxcna]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n nqn.2019-08.org.qemu:fdp-subsys3 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[subnqn]="nqn.2019-08.org.qemu:fdp-subsys3"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[subnqn]=nqn.2019-08.org.qemu:fdp-subsys3 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ioccsz]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ioccsz]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[iorcsz]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[iorcsz]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[icdoff]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[icdoff]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[fcatt]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[fcatt]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[msdbd]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[msdbd]=0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ofcs]="0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ofcs]=0 
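
Note: the trace above is nvme/functions.sh's nvme_get walking the output of `nvme id-ctrl /dev/nvme3` one register at a time — each line is split on `:` into a register name and a value, and non-empty values are stored into the nvme3 associative array via eval. A minimal sketch of that pattern, simplified from the real helper (the eval quoting and header handling are omitted here):

    declare -A ctrl
    while IFS=: read -r reg val; do
        [[ -n $val ]] || continue          # skip separators and headers with no value
        reg=${reg//[[:space:]]/}           # strip the padding around the register name
        ctrl[$reg]=${val# }                # drop the single leading space left by the split
    done < <(/usr/local/src/nvme-cli/nvme id-ctrl /dev/nvme3)
    echo "ctratt=${ctrl[ctratt]}"          # e.g. 0x88010 for the controller traced above
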
00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0 ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[ps0]="mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[ps0]='mp:25.00W operational enlat:16 exlat:4 rrt:0 rrl:0' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n 0 rwl:0 idle_power:- active_power:- ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[rwt]="0 rwl:0 idle_power:- active_power:-"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[rwt]='0 rwl:0 idle_power:- active_power:-' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@22 -- # [[ -n - ]] 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # eval 'nvme3[active_power_workload]="-"' 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@23 -- # nvme3[active_power_workload]=- 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # IFS=: 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@21 -- # read -r reg val 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@53 -- # local -n _ctrl_ns=nvme3_ns 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@60 -- # ctrls["$ctrl_dev"]=nvme3 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@61 -- # nvmes["$ctrl_dev"]=nvme3_ns 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@62 -- # bdfs["$ctrl_dev"]=0000:00:13.0 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@63 -- # ordered_ctrls[${ctrl_dev/nvme/}]=nvme3 00:09:13.643 02:04:38 nvme_fdp -- nvme/functions.sh@65 -- # (( 4 > 0 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # get_ctrl_with_feature fdp 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@204 -- # local _ctrls feature=fdp 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@206 -- # _ctrls=($(get_ctrls_with_feature "$feature")) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@206 -- # get_ctrls_with_feature fdp 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@192 -- # (( 4 == 0 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@194 -- # local ctrl feature=fdp 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@196 -- # type -t ctrl_has_fdp 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@196 -- # [[ function == function ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme1 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme1 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme1 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme1 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme1 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme1 reg=ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme1 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme1 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@75 
-- # [[ -n 0x8000 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme0 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme0 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme0 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme0 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme0 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme0 reg=ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme0 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme0 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme3 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme3 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme3 reg=ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme3 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x88010 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x88010 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x88010 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@199 -- # echo nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@198 -- # for ctrl in "${!ctrls[@]}" 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@199 -- # ctrl_has_fdp nvme2 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@176 -- # local ctrl=nvme2 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # get_ctratt nvme2 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@166 -- # local ctrl=nvme2 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@167 -- # get_nvme_ctrl_feature nvme2 ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@69 -- # local ctrl=nvme2 reg=ctratt 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@71 -- # [[ -n nvme2 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@73 -- # local -n _ctrl=nvme2 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@75 -- # [[ -n 0x8000 ]] 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@76 -- # echo 0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@178 -- # ctratt=0x8000 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@180 -- # (( ctratt & 1 << 19 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@207 -- 
# (( 1 > 0 )) 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@208 -- # echo nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/functions.sh@209 -- # return 0 00:09:13.905 02:04:38 nvme_fdp -- nvme/nvme_fdp.sh@13 -- # ctrl=nvme3 00:09:13.905 02:04:38 nvme_fdp -- nvme/nvme_fdp.sh@14 -- # bdf=0000:00:13.0 00:09:13.905 02:04:38 nvme_fdp -- nvme/nvme_fdp.sh@16 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:14.166 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:14.738 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.738 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.738 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.738 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:09:14.738 02:04:39 nvme_fdp -- nvme/nvme_fdp.sh@18 -- # run_test nvme_flexible_data_placement /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:14.738 02:04:39 nvme_fdp -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:09:14.738 02:04:39 nvme_fdp -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:14.738 02:04:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:14.738 ************************************ 00:09:14.738 START TEST nvme_flexible_data_placement 00:09:14.738 ************************************ 00:09:14.738 02:04:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/fdp/fdp -r 'trtype:pcie traddr:0000:00:13.0' 00:09:15.000 Initializing NVMe Controllers 00:09:15.000 Attaching to 0000:00:13.0 00:09:15.000 Controller supports FDP Attached to 0000:00:13.0 00:09:15.000 Namespace ID: 1 Endurance Group ID: 1 00:09:15.000 Initialization complete. 
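
Note: controller selection in the trace just above comes down to a single bit — ctrl_has_fdp keeps a controller only when bit 19 of its CTRATT identify field (Flexible Data Placement supported) is set, which is why nvme3 with ctratt=0x88010 is chosen while the 0x8000 controllers are passed over. A condensed sketch of that check, assuming the per-controller arrays were filled in as traced earlier:

    ctrl_has_fdp() {
        local -n _ctrl=$1                  # nameref to an array such as nvme3
        local ctratt=${_ctrl[ctratt]:-0}
        (( ctratt & 1 << 19 ))             # CTRATT bit 19 = FDP supported
    }
    ctrl_has_fdp nvme3 && echo nvme3       # 0x88010 has bit 19 set; 0x8000 does not
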
00:09:15.000 00:09:15.000 ================================== 00:09:15.000 == FDP tests for Namespace: #01 == 00:09:15.000 ================================== 00:09:15.000 00:09:15.000 Get Feature: FDP: 00:09:15.000 ================= 00:09:15.000 Enabled: Yes 00:09:15.000 FDP configuration Index: 0 00:09:15.000 00:09:15.000 FDP configurations log page 00:09:15.000 =========================== 00:09:15.000 Number of FDP configurations: 1 00:09:15.000 Version: 0 00:09:15.000 Size: 112 00:09:15.000 FDP Configuration Descriptor: 0 00:09:15.000 Descriptor Size: 96 00:09:15.000 Reclaim Group Identifier format: 2 00:09:15.000 FDP Volatile Write Cache: Not Present 00:09:15.000 FDP Configuration: Valid 00:09:15.000 Vendor Specific Size: 0 00:09:15.000 Number of Reclaim Groups: 2 00:09:15.000 Number of Reclaim Unit Handles: 8 00:09:15.000 Max Placement Identifiers: 128 00:09:15.000 Number of Namespaces Supported: 256 00:09:15.000 Reclaim unit Nominal Size: 6000000 bytes 00:09:15.000 Estimated Reclaim Unit Time Limit: Not Reported 00:09:15.000 RUH Desc #000: RUH Type: Initially Isolated 00:09:15.000 RUH Desc #001: RUH Type: Initially Isolated 00:09:15.000 RUH Desc #002: RUH Type: Initially Isolated 00:09:15.000 RUH Desc #003: RUH Type: Initially Isolated 00:09:15.001 RUH Desc #004: RUH Type: Initially Isolated 00:09:15.001 RUH Desc #005: RUH Type: Initially Isolated 00:09:15.001 RUH Desc #006: RUH Type: Initially Isolated 00:09:15.001 RUH Desc #007: RUH Type: Initially Isolated 00:09:15.001 00:09:15.001 FDP reclaim unit handle usage log page 00:09:15.001 ====================================== 00:09:15.001 Number of Reclaim Unit Handles: 8 00:09:15.001 RUH Usage Desc #000: RUH Attributes: Controller Specified 00:09:15.001 RUH Usage Desc #001: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #002: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #003: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #004: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #005: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #006: RUH Attributes: Unused 00:09:15.001 RUH Usage Desc #007: RUH Attributes: Unused 00:09:15.001 00:09:15.001 FDP statistics log page 00:09:15.001 ======================= 00:09:15.001 Host bytes with metadata written: 1090224128 00:09:15.001 Media bytes with metadata written: 1090396160 00:09:15.001 Media bytes erased: 0 00:09:15.001 00:09:15.001 FDP Reclaim unit handle status 00:09:15.001 ============================== 00:09:15.001 Number of RUHS descriptors: 2 00:09:15.001 RUHS Desc: #0000 PID: 0x0000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000001048 00:09:15.001 RUHS Desc: #0001 PID: 0x4000 RUHID: 0x0000 ERUT: 0x00000000 RUAMW: 0x0000000000006000 00:09:15.001 00:09:15.001 FDP write on placement id: 0 success 00:09:15.001 00:09:15.001 Set Feature: Enabling FDP events on Placement handle: #0 Success 00:09:15.001 00:09:15.001 IO mgmt send: RUH update for Placement ID: #0 Success 00:09:15.001 00:09:15.001 Get Feature: FDP Events for Placement handle: #0 00:09:15.001 ======================== 00:09:15.001 Number of FDP Events: 6 00:09:15.001 FDP Event: #0 Type: RU Not Written to Capacity Enabled: Yes 00:09:15.001 FDP Event: #1 Type: RU Time Limit Exceeded Enabled: Yes 00:09:15.001 FDP Event: #2 Type: Ctrlr Reset Modified RUH's Enabled: Yes 00:09:15.001 FDP Event: #3 Type: Invalid Placement Identifier Enabled: Yes 00:09:15.001 FDP Event: #4 Type: Media Reallocated Enabled: No 00:09:15.001 FDP Event: #5 Type: Implicitly modified RUH Enabled: No 00:09:15.001 00:09:15.001 FDP events log
page 00:09:15.001 =================== 00:09:15.001 Number of FDP events: 1 00:09:15.001 FDP Event #0: 00:09:15.001 Event Type: RU Not Written to Capacity 00:09:15.001 Placement Identifier: Valid 00:09:15.001 NSID: Valid 00:09:15.001 Location: Valid 00:09:15.001 Placement Identifier: 0 00:09:15.001 Event Timestamp: 7 00:09:15.001 Namespace Identifier: 1 00:09:15.001 Reclaim Group Identifier: 0 00:09:15.001 Reclaim Unit Handle Identifier: 0 00:09:15.001 00:09:15.001 FDP test passed 00:09:15.001 00:09:15.001 real 0m0.240s 00:09:15.001 user 0m0.072s 00:09:15.001 sys 0m0.066s 00:09:15.001 ************************************ 00:09:15.001 02:04:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.001 02:04:39 nvme_fdp.nvme_flexible_data_placement -- common/autotest_common.sh@10 -- # set +x 00:09:15.001 END TEST nvme_flexible_data_placement 00:09:15.001 ************************************ 00:09:15.001 00:09:15.001 real 0m7.715s 00:09:15.001 user 0m1.104s 00:09:15.001 sys 0m1.376s 00:09:15.001 ************************************ 00:09:15.001 END TEST nvme_fdp 00:09:15.001 02:04:39 nvme_fdp -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:15.001 02:04:39 nvme_fdp -- common/autotest_common.sh@10 -- # set +x 00:09:15.001 ************************************ 00:09:15.262 02:04:39 -- spdk/autotest.sh@232 -- # [[ '' -eq 1 ]] 00:09:15.262 02:04:39 -- spdk/autotest.sh@236 -- # run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:15.262 02:04:39 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:15.262 02:04:39 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:15.262 02:04:39 -- common/autotest_common.sh@10 -- # set +x 00:09:15.262 ************************************ 00:09:15.262 START TEST nvme_rpc 00:09:15.262 ************************************ 00:09:15.262 02:04:39 nvme_rpc -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh 00:09:15.262 * Looking for test storage... 
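
Note: every suite in this log is driven through the same run_test wrapper from autotest_common.sh, which prints the asterisk banners, times the suite (the real/user/sys lines above), and propagates its exit code. A rough sketch of that shape — the argument checks and xtrace handling of the real helper are left out, so treat this as illustrative only:

    run_test() {
        local name=$1; shift
        echo '************************************'
        echo "START TEST $name"
        echo '************************************'
        time "$@"                          # the suite itself; time reports real/user/sys
        local rc=$?
        echo '************************************'
        echo "END TEST $name"
        echo '************************************'
        return "$rc"
    }
    run_test nvme_rpc /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc.sh
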
00:09:15.262 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:15.262 02:04:39 nvme_rpc -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:15.262 02:04:39 nvme_rpc -- common/autotest_common.sh@1711 -- # lcov --version 00:09:15.262 02:04:39 nvme_rpc -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:15.262 02:04:39 nvme_rpc -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@336 -- # IFS=.-: 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@336 -- # read -ra ver1 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@337 -- # IFS=.-: 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@337 -- # read -ra ver2 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@338 -- # local 'op=<' 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@340 -- # ver1_l=2 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@341 -- # ver2_l=1 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:15.262 02:04:39 nvme_rpc -- scripts/common.sh@344 -- # case "$op" in 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@345 -- # : 1 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@365 -- # decimal 1 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@353 -- # local d=1 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@355 -- # echo 1 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@365 -- # ver1[v]=1 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@366 -- # decimal 2 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@353 -- # local d=2 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@355 -- # echo 2 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@366 -- # ver2[v]=2 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:15.263 02:04:39 nvme_rpc -- scripts/common.sh@368 -- # return 0 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.263 --rc genhtml_branch_coverage=1 00:09:15.263 --rc genhtml_function_coverage=1 00:09:15.263 --rc genhtml_legend=1 00:09:15.263 --rc geninfo_all_blocks=1 00:09:15.263 --rc geninfo_unexecuted_blocks=1 00:09:15.263 00:09:15.263 ' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.263 --rc genhtml_branch_coverage=1 00:09:15.263 --rc genhtml_function_coverage=1 00:09:15.263 --rc genhtml_legend=1 00:09:15.263 --rc geninfo_all_blocks=1 00:09:15.263 --rc geninfo_unexecuted_blocks=1 00:09:15.263 00:09:15.263 ' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 
00:09:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.263 --rc genhtml_branch_coverage=1 00:09:15.263 --rc genhtml_function_coverage=1 00:09:15.263 --rc genhtml_legend=1 00:09:15.263 --rc geninfo_all_blocks=1 00:09:15.263 --rc geninfo_unexecuted_blocks=1 00:09:15.263 00:09:15.263 ' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:15.263 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:15.263 --rc genhtml_branch_coverage=1 00:09:15.263 --rc genhtml_function_coverage=1 00:09:15.263 --rc genhtml_legend=1 00:09:15.263 --rc geninfo_all_blocks=1 00:09:15.263 --rc geninfo_unexecuted_blocks=1 00:09:15.263 00:09:15.263 ' 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # get_first_nvme_bdf 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1509 -- # bdfs=() 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1509 -- # local bdfs 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1510 -- # bdfs=($(get_nvme_bdfs)) 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1510 -- # get_nvme_bdfs 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1498 -- # bdfs=() 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1498 -- # local bdfs 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1499 -- # bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr')) 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1499 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1499 -- # jq -r '.config[].params.traddr' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1500 -- # (( 4 == 0 )) 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1504 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@1512 -- # echo 0000:00:10.0 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@13 -- # bdf=0000:00:10.0 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@16 -- # spdk_tgt_pid=67508 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@17 -- # trap 'kill -9 ${spdk_tgt_pid}; exit 1' SIGINT SIGTERM EXIT 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@19 -- # waitforlisten 67508 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 67508 ']' 00:09:15.263 02:04:39 nvme_rpc -- nvme/nvme_rpc.sh@15 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:15.263 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:15.263 02:04:39 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:15.524 [2024-12-15 02:04:40.054676] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
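
Note: get_first_nvme_bdf above is how nvme_rpc.sh decides which controller to attach — scripts/gen_nvme.sh emits a JSON bdev config covering every local NVMe device, jq extracts each transport address, and the first BDF in the list wins. A minimal standalone sketch using the same paths and jq filter as this run:

    rootdir=/home/vagrant/spdk_repo/spdk
    # Every NVMe PCI address (BDF) known to the generated config, in order.
    bdfs=($("$rootdir/scripts/gen_nvme.sh" | jq -r '.config[].params.traddr'))
    (( ${#bdfs[@]} > 0 )) || { echo 'no NVMe devices found' >&2; exit 1; }
    echo "${bdfs[0]}"                      # 0000:00:10.0 in the run traced here
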
00:09:15.524 [2024-12-15 02:04:40.054800] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67508 ] 00:09:15.524 [2024-12-15 02:04:40.212406] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:15.785 [2024-12-15 02:04:40.310727] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:15.785 [2024-12-15 02:04:40.310801] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:16.356 02:04:40 nvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:16.356 02:04:40 nvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:09:16.356 02:04:40 nvme_rpc -- nvme/nvme_rpc.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b Nvme0 -t PCIe -a 0000:00:10.0 00:09:16.617 Nvme0n1 00:09:16.617 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@27 -- # '[' -f non_existing_file ']' 00:09:16.617 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_apply_firmware non_existing_file Nvme0n1 00:09:16.617 request: 00:09:16.617 { 00:09:16.617 "bdev_name": "Nvme0n1", 00:09:16.617 "filename": "non_existing_file", 00:09:16.617 "method": "bdev_nvme_apply_firmware", 00:09:16.617 "req_id": 1 00:09:16.617 } 00:09:16.617 Got JSON-RPC error response 00:09:16.617 response: 00:09:16.617 { 00:09:16.617 "code": -32603, 00:09:16.617 "message": "open file failed." 00:09:16.617 } 00:09:16.617 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@32 -- # rv=1 00:09:16.617 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@33 -- # '[' -z 1 ']' 00:09:16.617 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_detach_controller Nvme0 00:09:16.877 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@39 -- # trap - SIGINT SIGTERM EXIT 00:09:16.877 02:04:41 nvme_rpc -- nvme/nvme_rpc.sh@40 -- # killprocess 67508 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 67508 ']' 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@958 -- # kill -0 67508 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@959 -- # uname 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67508 00:09:16.877 killing process with pid 67508 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67508' 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@973 -- # kill 67508 00:09:16.877 02:04:41 nvme_rpc -- common/autotest_common.sh@978 -- # wait 67508 00:09:18.786 00:09:18.786 real 0m3.243s 00:09:18.786 user 0m6.199s 00:09:18.786 sys 0m0.485s 00:09:18.786 02:04:43 nvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:18.786 02:04:43 nvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:09:18.786 ************************************ 00:09:18.786 END TEST nvme_rpc 00:09:18.786 ************************************ 00:09:18.786 02:04:43 -- spdk/autotest.sh@237 -- # run_test nvme_rpc_timeouts /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.786 02:04:43 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:09:18.786 02:04:43 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:18.786 02:04:43 -- common/autotest_common.sh@10 -- # set +x 00:09:18.786 ************************************ 00:09:18.786 START TEST nvme_rpc_timeouts 00:09:18.786 ************************************ 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/nvme_rpc_timeouts.sh 00:09:18.786 * Looking for test storage... 00:09:18.786 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lcov --version 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # IFS=.-: 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@336 -- # read -ra ver1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # IFS=.-: 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@337 -- # read -ra ver2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@338 -- # local 'op=<' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@340 -- # ver1_l=2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@341 -- # ver2_l=1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@344 -- # case "$op" in 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@345 -- # : 1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # decimal 1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@365 -- # ver1[v]=1 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # decimal 2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@353 -- # local d=2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@355 -- # echo 2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@366 -- # ver2[v]=2 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:18.786 02:04:43 nvme_rpc_timeouts -- scripts/common.sh@368 -- # return 0 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:18.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.786 --rc genhtml_branch_coverage=1 00:09:18.786 --rc genhtml_function_coverage=1 00:09:18.786 --rc genhtml_legend=1 00:09:18.786 --rc geninfo_all_blocks=1 00:09:18.786 --rc geninfo_unexecuted_blocks=1 00:09:18.786 00:09:18.786 ' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:18.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.786 --rc genhtml_branch_coverage=1 00:09:18.786 --rc genhtml_function_coverage=1 00:09:18.786 --rc genhtml_legend=1 00:09:18.786 --rc geninfo_all_blocks=1 00:09:18.786 --rc geninfo_unexecuted_blocks=1 00:09:18.786 00:09:18.786 ' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:18.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.786 --rc genhtml_branch_coverage=1 00:09:18.786 --rc genhtml_function_coverage=1 00:09:18.786 --rc genhtml_legend=1 00:09:18.786 --rc geninfo_all_blocks=1 00:09:18.786 --rc geninfo_unexecuted_blocks=1 00:09:18.786 00:09:18.786 ' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:18.786 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:18.786 --rc genhtml_branch_coverage=1 00:09:18.786 --rc genhtml_function_coverage=1 00:09:18.786 --rc genhtml_legend=1 00:09:18.786 --rc geninfo_all_blocks=1 00:09:18.786 --rc geninfo_unexecuted_blocks=1 00:09:18.786 00:09:18.786 ' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@19 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@21 -- # tmpfile_default_settings=/tmp/settings_default_67573 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@22 -- # tmpfile_modified_settings=/tmp/settings_modified_67573 00:09:18.786 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
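For orientation: the cmp_versions probe traced above is a dotted-version comparison used to decide whether the installed lcov is older than 2, and therefore needs the legacy --rc lcov_* option spelling exported just afterwards. A minimal standalone sketch of the same logic, assuming bash (function name and structure are illustrative; the real implementation lives in scripts/common.sh and additionally validates that each field is numeric):

    # Return success when dotted version $1 is strictly less than $2.
    lt() {
      local -a ver1 ver2
      IFS=.-: read -ra ver1 <<< "$1"
      IFS=.-: read -ra ver2 <<< "$2"
      local v max=$(( ${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]} ))
      for (( v = 0; v < max; v++ )); do
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0  # earliest differing field decides
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1
      done
      return 1  # equal versions are not "less than"
    }

    lt 1.15 2 && echo "lcov < 2: use the --rc lcov_branch_coverage=1 style options"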
00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@25 -- # spdk_tgt_pid=67605 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@26 -- # trap 'kill -9 ${spdk_tgt_pid}; rm -f ${tmpfile_default_settings} ${tmpfile_modified_settings} ; exit 1' SIGINT SIGTERM EXIT 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@27 -- # waitforlisten 67605 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@835 -- # '[' -z 67605 ']' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@840 -- # local max_retries=100 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:09:18.786 02:04:43 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@24 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@844 -- # xtrace_disable 00:09:18.786 02:04:43 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:18.787 [2024-12-15 02:04:43.301945] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:09:18.787 [2024-12-15 02:04:43.302033] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid67605 ] 00:09:18.787 [2024-12-15 02:04:43.454143] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:09:19.046 [2024-12-15 02:04:43.550781] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:09:19.046 [2024-12-15 02:04:43.550852] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:09:19.612 02:04:44 nvme_rpc_timeouts -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:09:19.612 02:04:44 nvme_rpc_timeouts -- common/autotest_common.sh@868 -- # return 0 00:09:19.612 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@29 -- # echo Checking default timeout settings: 00:09:19.612 Checking default timeout settings: 00:09:19.612 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:19.870 Making settings changes with rpc: 00:09:19.870 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@32 -- # echo Making settings changes with rpc: 00:09:19.870 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_set_options --timeout-us=12000000 --timeout-admin-us=24000000 --action-on-timeout=abort 00:09:20.128 Check default vs. modified settings: 00:09:20.128 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@36 -- # echo Check default vs. 
modified settings: 00:09:20.128 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@37 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@38 -- # settings_to_check='action_on_timeout timeout_us timeout_admin_us' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep action_on_timeout /tmp/settings_default_67573 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=none 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep action_on_timeout /tmp/settings_modified_67573 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.386 Setting action_on_timeout is changed as expected. 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=abort 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' none == abort ']' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting action_on_timeout is changed as expected. 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_us /tmp/settings_default_67573 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.386 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.387 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.387 02:04:44 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_us /tmp/settings_modified_67573 00:09:20.387 Setting timeout_us is changed as expected. 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=12000000 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 12000000 ']' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_us is changed as expected. 
00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@39 -- # for setting in $settings_to_check 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # grep timeout_admin_us /tmp/settings_default_67573 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # awk '{print $2}' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@40 -- # setting_before=0 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # awk '{print $2}' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # sed 's/[^a-zA-Z0-9]//g' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # grep timeout_admin_us /tmp/settings_modified_67573 00:09:20.387 Setting timeout_admin_us is changed as expected. 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@41 -- # setting_modified=24000000 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@42 -- # '[' 0 == 24000000 ']' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@47 -- # echo Setting timeout_admin_us is changed as expected. 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@52 -- # trap - SIGINT SIGTERM EXIT 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@53 -- # rm -f /tmp/settings_default_67573 /tmp/settings_modified_67573 00:09:20.387 02:04:45 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@54 -- # killprocess 67605 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@954 -- # '[' -z 67605 ']' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@958 -- # kill -0 67605 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # uname 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 67605 00:09:20.387 killing process with pid 67605 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@972 -- # echo 'killing process with pid 67605' 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@973 -- # kill 67605 00:09:20.387 02:04:45 nvme_rpc_timeouts -- common/autotest_common.sh@978 -- # wait 67605 00:09:21.763 RPC TIMEOUT SETTING TEST PASSED. 00:09:21.763 02:04:46 nvme_rpc_timeouts -- nvme/nvme_rpc_timeouts.sh@56 -- # echo RPC TIMEOUT SETTING TEST PASSED. 
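Taken together, the timeouts test just traced boils down to: snapshot the target's config, change the three nvme timeout options over JSON-RPC, snapshot again, and assert each value actually moved. A condensed sketch of that flow, using the same rpc.py and temp-file paths as the log (the grep/awk/sed scrubbing mirrors nvme_rpc_timeouts.sh@38-47; error handling is simplified):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc save_config > /tmp/settings_default_67573
    $rpc bdev_nvme_set_options --timeout-us=12000000 \
        --timeout-admin-us=24000000 --action-on-timeout=abort
    $rpc save_config > /tmp/settings_modified_67573

    for setting in action_on_timeout timeout_us timeout_admin_us; do
      # Strip everything but alphanumerics so "abort", and 12000000, compare cleanly.
      before=$(grep "$setting" /tmp/settings_default_67573 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      after=$(grep "$setting" /tmp/settings_modified_67573 | awk '{print $2}' | sed 's/[^a-zA-Z0-9]//g')
      [ "$before" == "$after" ] && { echo "ERROR: $setting unchanged"; exit 1; }
      echo "Setting $setting is changed as expected."
    done

The expected transitions visible in the trace above are none to abort, 0 to 12000000, and 0 to 24000000.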
00:09:21.763 00:09:21.763 real 0m3.312s 00:09:21.763 user 0m6.473s 00:09:21.763 sys 0m0.469s 00:09:21.763 ************************************ 00:09:21.763 END TEST nvme_rpc_timeouts 00:09:21.763 ************************************ 00:09:21.763 02:04:46 nvme_rpc_timeouts -- common/autotest_common.sh@1130 -- # xtrace_disable 00:09:21.763 02:04:46 nvme_rpc_timeouts -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 02:04:46 -- spdk/autotest.sh@239 -- # uname -s 00:09:21.763 02:04:46 -- spdk/autotest.sh@239 -- # '[' Linux = Linux ']' 00:09:21.763 02:04:46 -- spdk/autotest.sh@240 -- # run_test sw_hotplug /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.763 02:04:46 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:09:21.763 02:04:46 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:09:21.763 02:04:46 -- common/autotest_common.sh@10 -- # set +x 00:09:21.763 ************************************ 00:09:21.763 START TEST sw_hotplug 00:09:21.763 ************************************ 00:09:21.763 02:04:46 sw_hotplug -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh 00:09:21.763 * Looking for test storage... 00:09:21.763 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme 00:09:21.763 02:04:46 sw_hotplug -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:09:21.763 02:04:46 sw_hotplug -- common/autotest_common.sh@1711 -- # lcov --version 00:09:21.763 02:04:46 sw_hotplug -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@333 -- # local ver1 ver1_l 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@334 -- # local ver2 ver2_l 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@336 -- # IFS=.-: 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@336 -- # read -ra ver1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@337 -- # IFS=.-: 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@337 -- # read -ra ver2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@338 -- # local 'op=<' 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@340 -- # ver1_l=2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@341 -- # ver2_l=1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@344 -- # case "$op" in 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@345 -- # : 1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@364 -- # (( v = 0 )) 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@365 -- # decimal 1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@353 -- # local d=1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@355 -- # echo 1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@365 -- # ver1[v]=1 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@366 -- # decimal 2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@353 -- # local d=2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@355 -- # echo 2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@366 -- # ver2[v]=2 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:09:22.025 02:04:46 sw_hotplug -- scripts/common.sh@368 -- # return 0 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:09:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.025 --rc genhtml_branch_coverage=1 00:09:22.025 --rc genhtml_function_coverage=1 00:09:22.025 --rc genhtml_legend=1 00:09:22.025 --rc geninfo_all_blocks=1 00:09:22.025 --rc geninfo_unexecuted_blocks=1 00:09:22.025 00:09:22.025 ' 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:09:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.025 --rc genhtml_branch_coverage=1 00:09:22.025 --rc genhtml_function_coverage=1 00:09:22.025 --rc genhtml_legend=1 00:09:22.025 --rc geninfo_all_blocks=1 00:09:22.025 --rc geninfo_unexecuted_blocks=1 00:09:22.025 00:09:22.025 ' 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:09:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.025 --rc genhtml_branch_coverage=1 00:09:22.025 --rc genhtml_function_coverage=1 00:09:22.025 --rc genhtml_legend=1 00:09:22.025 --rc geninfo_all_blocks=1 00:09:22.025 --rc geninfo_unexecuted_blocks=1 00:09:22.025 00:09:22.025 ' 00:09:22.025 02:04:46 sw_hotplug -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:09:22.025 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:09:22.025 --rc genhtml_branch_coverage=1 00:09:22.025 --rc genhtml_function_coverage=1 00:09:22.025 --rc genhtml_legend=1 00:09:22.025 --rc geninfo_all_blocks=1 00:09:22.025 --rc geninfo_unexecuted_blocks=1 00:09:22.025 00:09:22.025 ' 00:09:22.025 02:04:46 sw_hotplug -- nvme/sw_hotplug.sh@129 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:22.285 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.285 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.285 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.285 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.285 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:09:22.285 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@131 -- # hotplug_wait=6 00:09:22.285 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@132 -- # hotplug_events=3 00:09:22.285 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvmes=($(nvme_in_userspace)) 
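The nvme_in_userspace expansion traced next is worth decoding once: it asks lspci for machine-readable output, keeps functions whose class/subclass/prog-if is 01/08/02 (NVMe), then checks each BDF against PCI_ALLOWED/PCI_BLOCKED and the bound driver. The core pipeline, reconstructed from the scripts/common.sh lines in the trace below (illustrative; the allow/deny and driver filtering is omitted):

    # Print the PCI address of every NVMe-class function on the system.
    lspci -mm -n -D | grep -i -- -p02 \
      | awk -v cc='"0108"' -F ' ' '{if (cc ~ $2) print $1}' \
      | tr -d '"'

On this VM that yields the four controllers 0000:00:10.0 through 0000:00:13.0, which the test then trims to the first nvme_count=2 entries.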
00:09:22.285 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@133 -- # nvme_in_userspace 00:09:22.285 02:04:47 sw_hotplug -- scripts/common.sh@312 -- # local bdf bdfs 00:09:22.285 02:04:47 sw_hotplug -- scripts/common.sh@313 -- # local nvmes 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@315 -- # [[ -n '' ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@318 -- # nvmes=($(iter_pci_class_code 01 08 02)) 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@318 -- # iter_pci_class_code 01 08 02 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@298 -- # local bdf= 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@300 -- # iter_all_pci_class_code 01 08 02 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@233 -- # local class 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@234 -- # local subclass 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@235 -- # local progif 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@236 -- # printf %02x 1 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@236 -- # class=01 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@237 -- # printf %02x 8 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@237 -- # subclass=08 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@238 -- # printf %02x 2 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@238 -- # progif=02 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@240 -- # hash lspci 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@241 -- # '[' 02 '!=' 00 ']' 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@242 -- # lspci -mm -n -D 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@245 -- # tr -d '"' 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@243 -- # grep -i -- -p02 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@244 -- # awk -v 'cc="0108"' -F ' ' '{if (cc ~ $2) print $1}' 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:10.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:10.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:10.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:11.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:11.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:11.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:12.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:12.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.544 02:04:47 sw_hotplug -- 
scripts/common.sh@302 -- # echo 0000:00:12.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@300 -- # for bdf in $(iter_all_pci_class_code "$@") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@301 -- # pci_can_use 0000:00:13.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@18 -- # local i 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@21 -- # [[ =~ 0000:00:13.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@25 -- # [[ -z '' ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@27 -- # return 0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@302 -- # echo 0000:00:13.0 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:10.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:11.0 ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.544 02:04:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:12.0 ]] 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@321 -- # for bdf in "${nvmes[@]}" 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@322 -- # [[ -e /sys/bus/pci/drivers/nvme/0000:00:13.0 ]] 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # uname -s 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@323 -- # [[ Linux == FreeBSD ]] 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@326 -- # bdfs+=("$bdf") 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@328 -- # (( 4 )) 00:09:22.545 02:04:47 sw_hotplug -- scripts/common.sh@329 -- # printf '%s\n' 0000:00:10.0 0000:00:11.0 0000:00:12.0 0000:00:13.0 00:09:22.545 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@134 -- # nvme_count=2 00:09:22.545 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@135 -- # nvmes=("${nvmes[@]::nvme_count}") 00:09:22.545 02:04:47 sw_hotplug -- nvme/sw_hotplug.sh@138 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:09:22.805 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:22.805 Waiting for block devices as requested 00:09:22.805 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.108 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.108 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:09:23.108 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:09:28.394 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:09:28.394 02:04:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # PCI_ALLOWED='0000:00:10.0 0000:00:11.0' 00:09:28.394 02:04:52 sw_hotplug -- nvme/sw_hotplug.sh@140 -- # 
/home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:09:28.654 0000:00:03.0 (1af4 1001): Skipping denied controller at 0000:00:03.0 00:09:28.654 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:09:28.654 0000:00:12.0 (1b36 0010): Skipping denied controller at 0000:00:12.0 00:09:28.915 0000:00:13.0 (1b36 0010): Skipping denied controller at 0000:00:13.0 00:09:29.176 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.176 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic 00:09:29.176 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@143 -- # xtrace_disable 00:09:29.176 02:04:53 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:09:29.176 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@148 -- # run_hotplug 00:09:29.176 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@77 -- # trap 'killprocess $hotplug_pid; exit 1' SIGINT SIGTERM EXIT 00:09:29.176 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@85 -- # hotplug_pid=68463 00:09:29.176 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@87 -- # debug_remove_attach_helper 3 6 false 00:09:29.177 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:09:29.177 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/examples/hotplug -i 0 -t 0 -n 6 -r 6 -l warning 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 false 00:09:29.438 02:04:53 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:09:29.438 02:04:53 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:09:29.438 02:04:53 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:09:29.438 02:04:53 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:09:29.438 02:04:53 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 false 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=false 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:09:29.438 02:04:53 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:09:29.438 Initializing NVMe Controllers 00:09:29.438 Attaching to 0000:00:10.0 00:09:29.438 Attaching to 0000:00:11.0 00:09:29.438 Attached to 0000:00:11.0 00:09:29.438 Attached to 0000:00:10.0 00:09:29.438 Initialization complete. Starting I/O... 
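One non-obvious piece of plumbing above: debug_remove_attach_helper wraps the real work in timing_cmd, which has to capture the time builtin's report without swallowing the helper's own console output. The trick visible in the autotest_common.sh trace (the exec at @711 and TIMEFORMAT=%2R at @713) is to duplicate stdout onto a spare descriptor first. A self-contained sketch of that idiom, assuming bash (the exact fd choreography here is a reconstruction, not a verbatim copy of the helper):

    timing_cmd() {
      local cmd_es=0 time=0 TIMEFORMAT=%2R   # %2R: bare elapsed seconds, 2 decimals
      exec 3>&1                              # keep a handle on the real console
      time=$( { time "$@" 1>&3 2>&3; } 2>&1 ) || cmd_es=$?
      exec 3>&-
      echo "$time"                           # e.g. "42.79"
      return "$cmd_es"
    }

That captured value is what later produces the "remove_attach_helper took 42.79s to complete (handling 2 nvme drive(s))" summary in this run.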
00:09:29.438 QEMU NVMe Ctrl (12341 ): 0 I/Os completed (+0) 00:09:29.438 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:29.438 00:09:30.381 QEMU NVMe Ctrl (12341 ): 2372 I/Os completed (+2372) 00:09:30.381 QEMU NVMe Ctrl (12340 ): 2373 I/Os completed (+2373) 00:09:30.381 00:09:31.766 QEMU NVMe Ctrl (12341 ): 5680 I/Os completed (+3308) 00:09:31.766 QEMU NVMe Ctrl (12340 ): 5681 I/Os completed (+3308) 00:09:31.766 00:09:32.698 QEMU NVMe Ctrl (12341 ): 9439 I/Os completed (+3759) 00:09:32.698 QEMU NVMe Ctrl (12340 ): 9444 I/Os completed (+3763) 00:09:32.698 00:09:33.633 QEMU NVMe Ctrl (12341 ): 13208 I/Os completed (+3769) 00:09:33.633 QEMU NVMe Ctrl (12340 ): 13231 I/Os completed (+3787) 00:09:33.633 00:09:34.566 QEMU NVMe Ctrl (12341 ): 16954 I/Os completed (+3746) 00:09:34.566 QEMU NVMe Ctrl (12340 ): 16988 I/Os completed (+3757) 00:09:34.566 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.502 [2024-12-15 02:04:59.948210] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:35.502 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:35.502 [2024-12-15 02:04:59.949247] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.949293] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.949307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.949322] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:35.502 [2024-12-15 02:04:59.950694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.950733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.950745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.950758] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:35.502 [2024-12-15 02:04:59.971692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:35.502 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:35.502 [2024-12-15 02:04:59.972595] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.972694] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.972731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.972784] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:35.502 [2024-12-15 02:04:59.974159] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.974184] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.974208] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 [2024-12-15 02:04:59.974218] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:35.502 02:04:59 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:35.502 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:35.503 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:35.503 Attaching to 0000:00:10.0 00:09:35.503 Attached to 0000:00:10.0 00:09:35.503 QEMU NVMe Ctrl (12340 ): 0 I/Os completed (+0) 00:09:35.503 00:09:35.503 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:35.503 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:35.503 02:05:00 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:35.503 Attaching to 0000:00:11.0 00:09:35.503 Attached to 0000:00:11.0 00:09:36.446 QEMU NVMe Ctrl (12340 ): 3318 I/Os completed (+3318) 00:09:36.446 QEMU NVMe Ctrl (12341 ): 3020 I/Os completed (+3020) 00:09:36.446 00:09:37.388 QEMU NVMe Ctrl (12340 ): 6569 I/Os completed (+3251) 00:09:37.388 QEMU NVMe Ctrl (12341 ): 6266 I/Os completed (+3246) 00:09:37.388 00:09:38.765 QEMU NVMe Ctrl (12340 ): 10180 I/Os completed (+3611) 00:09:38.765 QEMU NVMe Ctrl (12341 ): 9893 I/Os completed (+3627) 00:09:38.765 00:09:39.704 QEMU NVMe Ctrl (12340 ): 13600 I/Os completed (+3420) 00:09:39.704 QEMU NVMe Ctrl (12341 ): 13321 I/Os completed (+3428) 00:09:39.704 00:09:40.647 QEMU NVMe Ctrl (12340 ): 16888 I/Os completed (+3288) 00:09:40.647 QEMU NVMe Ctrl (12341 ): 16609 I/Os completed (+3288) 00:09:40.647 00:09:41.582 QEMU NVMe Ctrl (12340 ): 20364 I/Os completed (+3476) 00:09:41.582 QEMU NVMe Ctrl (12341 ): 20088 I/Os completed (+3479) 00:09:41.582 00:09:42.516 QEMU NVMe Ctrl (12340 ): 24109 I/Os completed (+3745) 00:09:42.516 QEMU NVMe Ctrl (12341 ): 23838 I/Os completed (+3750) 00:09:42.516 00:09:43.455 QEMU NVMe Ctrl (12340 ): 27326 I/Os completed (+3217) 00:09:43.455 QEMU 
NVMe Ctrl (12341 ): 27135 I/Os completed (+3297) 00:09:43.455 00:09:44.398 QEMU NVMe Ctrl (12340 ): 29974 I/Os completed (+2648) 00:09:44.398 QEMU NVMe Ctrl (12341 ): 29794 I/Os completed (+2659) 00:09:44.398 00:09:45.782 QEMU NVMe Ctrl (12340 ): 32730 I/Os completed (+2756) 00:09:45.783 QEMU NVMe Ctrl (12341 ): 32550 I/Os completed (+2756) 00:09:45.783 00:09:46.724 QEMU NVMe Ctrl (12340 ): 35570 I/Os completed (+2840) 00:09:46.724 QEMU NVMe Ctrl (12341 ): 35392 I/Os completed (+2842) 00:09:46.724 00:09:47.660 QEMU NVMe Ctrl (12340 ): 38855 I/Os completed (+3285) 00:09:47.660 QEMU NVMe Ctrl (12341 ): 38672 I/Os completed (+3280) 00:09:47.660 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.660 [2024-12-15 02:05:12.223363] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:47.660 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:47.660 [2024-12-15 02:05:12.224348] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.224461] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.224523] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.224550] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:47.660 [2024-12-15 02:05:12.226120] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.226223] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.226252] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.226307] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:47.660 [2024-12-15 02:05:12.241692] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:47.660 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:47.660 [2024-12-15 02:05:12.242657] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.242741] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.242803] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.242828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:47.660 [2024-12-15 02:05:12.244225] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.244273] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.244298] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 [2024-12-15 02:05:12.244321] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:47.660 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:47.660 EAL: Scan for (pci) bus failed. 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:47.660 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:47.660 Attaching to 0000:00:10.0 00:09:47.660 Attached to 0000:00:10.0 00:09:47.922 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:09:47.922 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:47.922 02:05:12 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:09:47.922 Attaching to 0000:00:11.0 00:09:47.922 Attached to 0000:00:11.0 00:09:48.495 QEMU NVMe Ctrl (12340 ): 2069 I/Os completed (+2069) 00:09:48.495 QEMU NVMe Ctrl (12341 ): 1843 I/Os completed (+1843) 00:09:48.495 00:09:49.439 QEMU NVMe Ctrl (12340 ): 4745 I/Os completed (+2676) 00:09:49.439 QEMU NVMe Ctrl (12341 ): 4519 I/Os completed (+2676) 00:09:49.439 00:09:50.822 QEMU NVMe Ctrl (12340 ): 7445 I/Os completed (+2700) 00:09:50.822 QEMU NVMe Ctrl (12341 ): 7230 I/Os completed (+2711) 00:09:50.822 00:09:51.393 QEMU NVMe Ctrl (12340 ): 10142 I/Os completed (+2697) 00:09:51.393 QEMU NVMe Ctrl (12341 ): 9933 I/Os completed (+2703) 00:09:51.393 00:09:52.781 QEMU NVMe Ctrl (12340 ): 12938 I/Os completed (+2796) 00:09:52.781 QEMU NVMe Ctrl (12341 ): 12729 I/Os completed (+2796) 00:09:52.781 00:09:53.726 QEMU NVMe Ctrl (12340 ): 15762 I/Os completed (+2824) 00:09:53.726 QEMU NVMe Ctrl (12341 ): 15553 I/Os completed (+2824) 00:09:53.726 00:09:54.695 QEMU NVMe Ctrl (12340 ): 18562 I/Os completed (+2800) 00:09:54.695 QEMU NVMe Ctrl (12341 ): 18353 I/Os completed (+2800) 00:09:54.695 
00:09:55.640 QEMU NVMe Ctrl (12340 ): 22094 I/Os completed (+3532) 00:09:55.640 QEMU NVMe Ctrl (12341 ): 21873 I/Os completed (+3520) 00:09:55.640 00:09:56.574 QEMU NVMe Ctrl (12340 ): 25854 I/Os completed (+3760) 00:09:56.574 QEMU NVMe Ctrl (12341 ): 25632 I/Os completed (+3759) 00:09:56.574 00:09:57.509 QEMU NVMe Ctrl (12340 ): 29614 I/Os completed (+3760) 00:09:57.509 QEMU NVMe Ctrl (12341 ): 29385 I/Os completed (+3753) 00:09:57.509 00:09:58.444 QEMU NVMe Ctrl (12340 ): 33386 I/Os completed (+3772) 00:09:58.444 QEMU NVMe Ctrl (12341 ): 33173 I/Os completed (+3788) 00:09:58.444 00:09:59.389 QEMU NVMe Ctrl (12340 ): 36517 I/Os completed (+3131) 00:09:59.389 QEMU NVMe Ctrl (12341 ): 36332 I/Os completed (+3159) 00:09:59.389 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.956 [2024-12-15 02:05:24.486117] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:09:59.956 Controller removed: QEMU NVMe Ctrl (12340 ) 00:09:59.956 [2024-12-15 02:05:24.487065] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.487108] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.487122] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.487138] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:09:59.956 [2024-12-15 02:05:24.488727] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.488760] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.488771] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.488783] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:10.0/vendor 00:09:59.956 EAL: Scan for (pci) bus failed. 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:09:59.956 [2024-12-15 02:05:24.508687] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:09:59.956 Controller removed: QEMU NVMe Ctrl (12341 ) 00:09:59.956 [2024-12-15 02:05:24.509751] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.509794] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.509813] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.509828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:09:59.956 [2024-12-15 02:05:24.511457] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.511489] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.511506] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 [2024-12-15 02:05:24.511518] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:09:59.956 EAL: eal_parse_sysfs_value(): cannot open sysfs value /sys/bus/pci/devices/0000:00:11.0/vendor 00:09:59.956 EAL: Scan for (pci) bus failed. 00:09:59.956 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # false 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:09:59.957 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:09:59.957 Attaching to 0000:00:10.0 00:09:59.957 Attached to 0000:00:10.0 00:10:00.216 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:00.216 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:00.216 02:05:24 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:00.216 Attaching to 0000:00:11.0 00:10:00.216 Attached to 0000:00:11.0 00:10:00.216 unregister_dev: QEMU NVMe Ctrl (12340 ) 00:10:00.216 unregister_dev: QEMU NVMe Ctrl (12341 ) 00:10:00.216 [2024-12-15 02:05:24.741558] rpc.c: 409:spdk_rpc_close: *WARNING*: spdk_rpc_close: deprecated feature spdk_rpc_close is deprecated to be removed in v24.09 00:10:12.450 02:05:36 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # false 00:10:12.450 02:05:36 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:12.450 02:05:36 sw_hotplug -- common/autotest_common.sh@719 -- # time=42.79 00:10:12.450 02:05:36 sw_hotplug -- common/autotest_common.sh@720 -- # echo 42.79 00:10:12.450 02:05:36 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:10:12.450 02:05:36 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=42.79 00:10:12.450 02:05:36 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 42.79 2 00:10:12.450 remove_attach_helper took 42.79s to complete (handling 2 nvme drive(s)) 02:05:36 sw_hotplug -- 
nvme/sw_hotplug.sh@91 -- # sleep 6 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@93 -- # kill -0 68463 00:10:19.038 /home/vagrant/spdk_repo/spdk/test/nvme/sw_hotplug.sh: line 93: kill: (68463) - No such process 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@95 -- # wait 68463 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@102 -- # trap - SIGINT SIGTERM EXIT 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@151 -- # tgt_run_hotplug 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@107 -- # local dev 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@110 -- # spdk_tgt_pid=69013 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@112 -- # trap 'killprocess ${spdk_tgt_pid}; echo 1 > /sys/bus/pci/rescan; exit 1' SIGINT SIGTERM EXIT 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@113 -- # waitforlisten 69013 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@835 -- # '[' -z 69013 ']' 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@840 -- # local max_retries=100 00:10:19.038 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@844 -- # xtrace_disable 00:10:19.038 02:05:42 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.038 02:05:42 sw_hotplug -- nvme/sw_hotplug.sh@109 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:10:19.038 [2024-12-15 02:05:42.836943] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:10:19.038 [2024-12-15 02:05:42.837095] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid69013 ] 00:10:19.038 [2024-12-15 02:05:43.000985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:10:19.038 [2024-12-15 02:05:43.125503] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@868 -- # return 0 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@115 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@117 -- # debug_remove_attach_helper 3 6 true 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:10:19.300 02:05:43 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local use_bdev=true 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:10:19.300 02:05:43 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.931 02:05:49 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 02:05:49 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 02:05:49 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:25.931 02:05:49 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:25.931 [2024-12-15 02:05:49.936181] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: 
[0000:00:10.0, 0] in failed state. 00:10:25.931 [2024-12-15 02:05:49.937477] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:49.937514] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:49.937527] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:49.937545] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:49.937553] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:49.937561] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:49.937567] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:49.937575] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:49.937581] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:49.937592] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:49.937599] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:49.937607] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:50.336178] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
00:10:25.931 [2024-12-15 02:05:50.337537] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:50.337573] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:50.337586] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:50.337601] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:50.337610] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:50.337617] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:50.337626] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:50.337633] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:50.337641] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 [2024-12-15 02:05:50.337647] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:25.931 [2024-12-15 02:05:50.337655] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:25.931 [2024-12-15 02:05:50.337661] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:25.931 02:05:50 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:25.931 02:05:50 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:25.931 02:05:50 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:25.931 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:26.190 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:26.190 02:05:50 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.390 02:06:02 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:10:38.390 02:06:02 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:38.390 [2024-12-15 02:06:02.836366] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
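The bdev_bdfs helper expanded repeatedly in the trace (sw_hotplug.sh@12-@13) is never printed in full, but the traced commands -- rpc_cmd bdev_get_bdevs feeding jq through /dev/fd/63 (i.e. process substitution), then sort -u -- suggest a helper of roughly this shape. A reconstruction from the visible trace, not the verbatim script:

# List the unique PCI addresses (BDFs) of every NVMe controller that
# currently backs a bdev in the running SPDK target.
bdev_bdfs() {
	jq -r '.[].driver_specific.nvme[].pci_address' <(rpc_cmd bdev_get_bdevs) | sort -u
}

An empty list means the detached controllers' bdevs are gone, which is what the (( 0 > 0 )) checks in the surrounding trace are testing.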
00:10:38.390 [2024-12-15 02:06:02.837600] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.390 [2024-12-15 02:06:02.837638] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.390 [2024-12-15 02:06:02.837649] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.390 [2024-12-15 02:06:02.837666] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.390 [2024-12-15 02:06:02.837672] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.390 [2024-12-15 02:06:02.837681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.390 [2024-12-15 02:06:02.837688] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.390 [2024-12-15 02:06:02.837696] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.390 [2024-12-15 02:06:02.837703] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.390 [2024-12-15 02:06:02.837711] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.390 [2024-12-15 02:06:02.837717] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.390 [2024-12-15 02:06:02.837725] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.649 [2024-12-15 02:06:03.236371] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
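Combined with the repeated "Still waiting for %s to be gone" printf, the @50/@51 tags imply a half-second polling loop around that helper. A minimal sketch inferred from the trace; the real sw_hotplug.sh may differ in detail:

# Poll until the detached controllers stop showing up as bdevs.
while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do
	printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
	sleep 0.5
done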
00:10:38.649 [2024-12-15 02:06:03.237631] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.649 [2024-12-15 02:06:03.237668] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.649 [2024-12-15 02:06:03.237681] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.649 [2024-12-15 02:06:03.237697] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.649 [2024-12-15 02:06:03.237706] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.649 [2024-12-15 02:06:03.237713] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.649 [2024-12-15 02:06:03.237722] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.649 [2024-12-15 02:06:03.237729] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.649 [2024-12-15 02:06:03.237738] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.649 [2024-12-15 02:06:03.237745] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:38.649 [2024-12-15 02:06:03.237753] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:38.649 [2024-12-15 02:06:03.237759] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:38.649 02:06:03 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:38.649 02:06:03 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:38.649 02:06:03 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.649 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:38.908 02:06:03 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.120 02:06:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.120 02:06:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.120 02:06:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.120 [2024-12-15 02:06:15.636578] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
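The @56-@62 echo sequences in this trace drive the reattach, but xtrace hides the redirection targets. The destinations in the following sketch are assumptions based on the standard Linux sysfs PCI hotplug interface, not something this log confirms:

echo 1 > /sys/bus/pci/rescan                    # @56: re-enumerate the removed functions (assumed target)
for dev in "${nvmes[@]}"; do                    # @58
	# @59: steer the function to the userspace driver
	echo uio_pci_generic > "/sys/bus/pci/devices/$dev/driver_override"
	# @60/@61: the BDF is echoed twice, plausibly once to bind and once
	# to trigger a probe; the log does not show which files are written.
	echo "$dev" > /sys/bus/pci/drivers/uio_pci_generic/bind 2>/dev/null || true
	echo "$dev" > /sys/bus/pci/drivers_probe
	# @62: clear the override so later tests are unaffected
	echo '' > "/sys/bus/pci/devices/$dev/driver_override"
done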
00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.120 [2024-12-15 02:06:15.637763] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.120 [2024-12-15 02:06:15.637800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.120 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.120 [2024-12-15 02:06:15.637812] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.120 [2024-12-15 02:06:15.637829] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.121 [2024-12-15 02:06:15.637837] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.121 [2024-12-15 02:06:15.637847] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.121 [2024-12-15 02:06:15.637853] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.121 [2024-12-15 02:06:15.637861] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.121 [2024-12-15 02:06:15.637867] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.121 [2024-12-15 02:06:15.637875] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.121 [2024-12-15 02:06:15.637882] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.121 [2024-12-15 02:06:15.637890] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.121 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.121 02:06:15 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.121 02:06:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.121 02:06:15 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.121 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:51.121 02:06:15 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:10:51.687 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:51.688 02:06:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:51.688 02:06:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:51.688 02:06:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:10:51.688 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # 
sleep 0.5 00:10:51.688 [2024-12-15 02:06:16.336582] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:10:51.688 [2024-12-15 02:06:16.337733] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.688 [2024-12-15 02:06:16.337766] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.688 [2024-12-15 02:06:16.337777] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.688 [2024-12-15 02:06:16.337791] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.688 [2024-12-15 02:06:16.337800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.688 [2024-12-15 02:06:16.337807] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.688 [2024-12-15 02:06:16.337817] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.688 [2024-12-15 02:06:16.337823] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.688 [2024-12-15 02:06:16.337832] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:51.688 [2024-12-15 02:06:16.337839] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:10:51.688 [2024-12-15 02:06:16.337847] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:10:51.688 [2024-12-15 02:06:16.337854] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:10:52.253 02:06:16 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:10:52.253 02:06:16 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:10:52.253 02:06:16 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:10:52.253 02:06:16 sw_hotplug -- 
nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:10:52.253 02:06:16 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:04.454 02:06:28 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:04.454 02:06:28 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.454 02:06:28 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.454 02:06:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@719 -- # time=45.17 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@720 -- # echo 45.17 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=45.17 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 45.17 2 00:11:04.455 remove_attach_helper took 45.17s to complete (handling 2 nvme drive(s)) 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@119 -- # rpc_cmd bdev_nvme_set_hotplug -d 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@120 -- # rpc_cmd bdev_nvme_set_hotplug -e 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@122 -- # debug_remove_attach_helper 3 6 true 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@19 -- # local helper_time=0 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # timing_cmd remove_attach_helper 3 6 true 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@709 -- # local cmd_es=0 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@711 -- # [[ -t 0 ]] 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@711 -- # exec 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@713 -- # local time=0 TIMEFORMAT=%2R 00:11:04.455 02:06:29 sw_hotplug -- common/autotest_common.sh@719 -- # remove_attach_helper 3 6 true 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@27 -- # local hotplug_events=3 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@28 -- # local hotplug_wait=6 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@29 -- # local 
use_bdev=true 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@30 -- # local dev bdfs 00:11:04.455 02:06:29 sw_hotplug -- nvme/sw_hotplug.sh@36 -- # sleep 6 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.015 02:06:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.015 02:06:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.015 02:06:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:11.015 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:11.015 [2024-12-15 02:06:35.138435] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:11.015 [2024-12-15 02:06:35.139344] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.015 [2024-12-15 02:06:35.139381] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.015 [2024-12-15 02:06:35.139391] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.015 [2024-12-15 02:06:35.139408] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.015 [2024-12-15 02:06:35.139416] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.015 [2024-12-15 02:06:35.139424] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.015 [2024-12-15 02:06:35.139432] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.015 [2024-12-15 02:06:35.139440] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.139446] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 [2024-12-15 02:06:35.139454] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.016 [2024-12-15 02:06:35.139461] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.139470] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 [2024-12-15 
02:06:35.538427] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:11.016 [2024-12-15 02:06:35.539413] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.016 [2024-12-15 02:06:35.539444] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.539455] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 [2024-12-15 02:06:35.539466] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.016 [2024-12-15 02:06:35.539476] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.539483] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 [2024-12-15 02:06:35.539491] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.016 [2024-12-15 02:06:35.539497] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.539505] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 [2024-12-15 02:06:35.539512] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:11.016 [2024-12-15 02:06:35.539519] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:11.016 [2024-12-15 02:06:35.539526] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:11.016 02:06:35 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:11.016 02:06:35 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:11.016 02:06:35 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:11.016 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # 
echo uio_pci_generic 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:11.274 02:06:35 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.473 [2024-12-15 02:06:47.938621] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 00:11:23.473 [2024-12-15 02:06:47.939646] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.473 [2024-12-15 02:06:47.939685] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.473 [2024-12-15 02:06:47.939696] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.473 [2024-12-15 02:06:47.939712] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.473 [2024-12-15 02:06:47.939720] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.473 [2024-12-15 02:06:47.939728] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.473 [2024-12-15 02:06:47.939735] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.473 [2024-12-15 02:06:47.939742] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.473 [2024-12-15 02:06:47.939749] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.473 [2024-12-15 02:06:47.939756] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.473 [2024-12-15 02:06:47.939763] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.473 [2024-12-15 02:06:47.939784] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.473 02:06:47 
sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.473 02:06:47 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 1 > 0 )) 00:11:23.473 02:06:47 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:23.731 [2024-12-15 02:06:48.338620] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 00:11:23.731 [2024-12-15 02:06:48.339503] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.731 [2024-12-15 02:06:48.339534] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.731 [2024-12-15 02:06:48.339545] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.731 [2024-12-15 02:06:48.339559] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.731 [2024-12-15 02:06:48.339569] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.731 [2024-12-15 02:06:48.339576] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.731 [2024-12-15 02:06:48.339585] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.731 [2024-12-15 02:06:48.339591] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.732 [2024-12-15 02:06:48.339599] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.732 [2024-12-15 02:06:48.339606] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:23.732 [2024-12-15 02:06:48.339614] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:23.732 [2024-12-15 02:06:48.339620] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:11.0 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' 
/dev/fd/63 00:11:23.732 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:23.732 02:06:48 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:23.732 02:06:48 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:23.990 02:06:48 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:11.0 00:11:23.990 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:24.248 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:24.248 02:06:48 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.456 02:07:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.456 02:07:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.456 02:07:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@39 -- # for dev in "${nvmes[@]}" 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@40 -- # echo 1 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@43 -- # true 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.456 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.456 02:07:00 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.456 02:07:00 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.456 [2024-12-15 02:07:00.838823] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:10.0, 0] in failed state. 
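Pieced together, the traced tags outline remove_attach_helper, which this test invoked as "remove_attach_helper 3 6 true" (see the @27-@29 locals earlier in the trace). A skeleton consistent with the visible tags -- the real function body is not printed in this log, and reattach_devices is a hypothetical name standing in for the @56-@62 sequence sketched above:

remove_attach_helper() {
	local hotplug_events=$1   # @27: 3 surprise-removal cycles in this run
	local hotplug_wait=$2     # @28: 6 s settle time
	local use_bdev=$3         # @29: true, i.e. verify via bdev_get_bdevs
	local dev bdfs

	sleep "$hotplug_wait"                                     # @36
	while ((hotplug_events--)); do                            # @38
		for dev in "${nvmes[@]}"; do                      # @39
			echo 1 > "/sys/bus/pci/devices/$dev/remove"   # @40: assumed target
		done
		while bdfs=($(bdev_bdfs)) && ((${#bdfs[@]} > 0)); do   # @50/@51
			printf 'Still waiting for %s to be gone\n' "${bdfs[@]}"
			sleep 0.5
		done
		reattach_devices                                  # @56-@62 (hypothetical name)
		sleep 12                                          # @66
		bdfs=($(bdev_bdfs))                               # @70
		[[ ${bdfs[*]} == "${nvmes[*]}" ]]                 # @71: both BDFs came back
	done
}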
00:11:36.456 [2024-12-15 02:07:00.839731] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.456 [2024-12-15 02:07:00.839764] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.457 [2024-12-15 02:07:00.839775] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.457 [2024-12-15 02:07:00.839793] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.457 [2024-12-15 02:07:00.839800] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.457 [2024-12-15 02:07:00.839810] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.457 [2024-12-15 02:07:00.839818] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.457 [2024-12-15 02:07:00.839828] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.457 [2024-12-15 02:07:00.839834] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.457 [2024-12-15 02:07:00.839842] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.457 [2024-12-15 02:07:00.839849] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.457 [2024-12-15 02:07:00.839858] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.457 02:07:00 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.457 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 2 > 0 )) 00:11:36.457 02:07:00 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # sleep 0.5 00:11:36.715 [2024-12-15 02:07:01.238825] nvme_ctrlr.c:1110:nvme_ctrlr_fail: *ERROR*: [0000:00:11.0, 0] in failed state. 
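The 45.17 s figure reported above (and the 44.62 s one at the end of this test) is produced by the timing_cmd wrapper: the trace shows it setting "local time=0 TIMEFORMAT=%2R" (@713), running the helper (@719), echoing the elapsed time (@720), and returning the command's status (@722). A sketch consistent with those tags, using a common bash idiom for capturing only the time(1) report; the exact autotest_common.sh body (including the @711 [[ -t 0 ]]/exec pair, whose redirections are not visible) is not shown in this log:

timing_cmd() {
	local cmd_es=0                     # @709: wrapped command's exit status
	local time=0 TIMEFORMAT=%2R        # @713: wall-clock seconds, two decimals
	# Duplicate the real stdout/stderr so that only bash's `time` report
	# lands in the command substitution below.
	exec 3>&1 4>&2
	time=$( { time "$@" 1>&3 2>&4; } 2>&1 ) || cmd_es=$?
	exec 3>&- 4>&-
	echo "$time"                       # @720: consumed as helper_time=...
	return "$cmd_es"                   # @722
}

The caller then reports it as "remove_attach_helper took %ss to complete (handling %u nvme drive(s))" via the @21/@22 tags seen in this trace.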
00:11:36.715 [2024-12-15 02:07:01.239728] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.715 [2024-12-15 02:07:01.239759] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:190 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.715 [2024-12-15 02:07:01.239770] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:190 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.715 [2024-12-15 02:07:01.239781] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.715 [2024-12-15 02:07:01.239790] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:189 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.715 [2024-12-15 02:07:01.239797] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:189 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.715 [2024-12-15 02:07:01.239806] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.715 [2024-12-15 02:07:01.239813] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:188 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.715 [2024-12-15 02:07:01.239820] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:188 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.715 [2024-12-15 02:07:01.239828] nvme_pcie_common.c: 782:nvme_pcie_qpair_abort_trackers: *ERROR*: aborting outstanding command 00:11:36.715 [2024-12-15 02:07:01.239838] nvme_qpair.c: 223:nvme_admin_qpair_print_command: *NOTICE*: ASYNC EVENT REQUEST (0c) qid:0 cid:187 nsid:0 cdw10:00000000 cdw11:00000000 00:11:36.715 [2024-12-15 02:07:01.239844] nvme_qpair.c: 474:spdk_nvme_print_completion: *NOTICE*: ABORTED - BY REQUEST (00/07) qid:0 cid:187 cdw0:0 sqhd:0000 p:0 m:0 dnr:0 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@51 -- # printf 'Still waiting for %s to be gone\n' 0000:00:10.0 0000:00:11.0 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdfs=($(bdev_bdfs)) 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # bdev_bdfs 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:36.715 02:07:01 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:36.715 02:07:01 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:36.715 02:07:01 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@50 -- # (( 0 > 0 )) 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@56 -- # echo 1 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.715 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 0000:00:10.0 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:10.0 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@58 -- # for dev in "${nvmes[@]}" 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@59 -- # echo uio_pci_generic 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@60 -- # echo 
0000:00:11.0 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@61 -- # echo 0000:00:11.0 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@62 -- # echo '' 00:11:36.973 02:07:01 sw_hotplug -- nvme/sw_hotplug.sh@66 -- # sleep 12 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@68 -- # true 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdfs=($(bdev_bdfs)) 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@70 -- # bdev_bdfs 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@13 -- # sort -u 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # jq -r '.[].driver_specific.nvme[].pci_address' /dev/fd/63 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@12 -- # rpc_cmd bdev_get_bdevs 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@71 -- # [[ 0000:00:10.0 0000:00:11.0 == \0\0\0\0\:\0\0\:\1\0\.\0\ \0\0\0\0\:\0\0\:\1\1\.\0 ]] 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@38 -- # (( hotplug_events-- )) 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@719 -- # time=44.62 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@720 -- # echo 44.62 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@722 -- # return 0 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@21 -- # helper_time=44.62 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@22 -- # printf 'remove_attach_helper took %ss to complete (handling %u nvme drive(s))' 44.62 2 00:11:49.180 remove_attach_helper took 44.62s to complete (handling 2 nvme drive(s)) 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@124 -- # trap - SIGINT SIGTERM EXIT 00:11:49.180 02:07:13 sw_hotplug -- nvme/sw_hotplug.sh@125 -- # killprocess 69013 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@954 -- # '[' -z 69013 ']' 00:11:49.180 02:07:13 sw_hotplug -- common/autotest_common.sh@958 -- # kill -0 69013 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@959 -- # uname 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 69013 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:49.181 killing process with pid 69013 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@972 -- # echo 'killing process with pid 69013' 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@973 -- # kill 69013 00:11:49.181 02:07:13 sw_hotplug -- common/autotest_common.sh@978 -- # wait 69013 00:11:50.562 02:07:14 sw_hotplug -- nvme/sw_hotplug.sh@154 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:11:50.562 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:51.135 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.135 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:11:51.135 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.135 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic 00:11:51.135 00:11:51.135 real 2m29.373s 00:11:51.135 user 1m50.883s 00:11:51.135 sys 0m16.894s 00:11:51.135 02:07:15 sw_hotplug -- 
common/autotest_common.sh@1130 -- # xtrace_disable 00:11:51.135 ************************************ 00:11:51.135 END TEST sw_hotplug 00:11:51.135 ************************************ 00:11:51.135 02:07:15 sw_hotplug -- common/autotest_common.sh@10 -- # set +x 00:11:51.135 02:07:15 -- spdk/autotest.sh@243 -- # [[ 1 -eq 1 ]] 00:11:51.135 02:07:15 -- spdk/autotest.sh@244 -- # run_test nvme_xnvme /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.135 02:07:15 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:51.135 02:07:15 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:51.135 02:07:15 -- common/autotest_common.sh@10 -- # set +x 00:11:51.135 ************************************ 00:11:51.135 START TEST nvme_xnvme 00:11:51.135 ************************************ 00:11:51.135 02:07:15 nvme_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/nvme/xnvme/xnvme.sh 00:11:51.400 * Looking for test storage... 00:11:51.400 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.400 02:07:15 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.400 02:07:15 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.400 02:07:15 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.400 02:07:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.400 --rc genhtml_branch_coverage=1 00:11:51.400 --rc genhtml_function_coverage=1 00:11:51.400 --rc genhtml_legend=1 00:11:51.400 --rc geninfo_all_blocks=1 00:11:51.400 --rc geninfo_unexecuted_blocks=1 00:11:51.400 00:11:51.400 ' 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.400 --rc genhtml_branch_coverage=1 00:11:51.400 --rc genhtml_function_coverage=1 00:11:51.400 --rc genhtml_legend=1 00:11:51.400 --rc geninfo_all_blocks=1 00:11:51.400 --rc geninfo_unexecuted_blocks=1 00:11:51.400 00:11:51.400 ' 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.400 --rc genhtml_branch_coverage=1 00:11:51.400 --rc genhtml_function_coverage=1 00:11:51.400 --rc genhtml_legend=1 00:11:51.400 --rc geninfo_all_blocks=1 00:11:51.400 --rc geninfo_unexecuted_blocks=1 00:11:51.400 00:11:51.400 ' 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.400 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.400 --rc genhtml_branch_coverage=1 00:11:51.400 --rc genhtml_function_coverage=1 00:11:51.400 --rc genhtml_legend=1 00:11:51.400 --rc geninfo_all_blocks=1 00:11:51.400 --rc geninfo_unexecuted_blocks=1 00:11:51.400 00:11:51.400 ' 00:11:51.400 02:07:16 nvme_xnvme -- xnvme/common.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/dd/common.sh 00:11:51.400 02:07:16 nvme_xnvme -- dd/common.sh@6 -- # source /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@7 -- # rpc_py=rpc_cmd 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@34 -- # set -e 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@35 -- # shopt -s nullglob 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@36 -- # shopt -s extglob 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@37 -- # shopt -s inherit_errexit 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@39 -- # '[' -z /home/vagrant/spdk_repo/spdk/../output ']' 00:11:51.400 02:07:16 
nvme_xnvme -- common/autotest_common.sh@44 -- # [[ -e /home/vagrant/spdk_repo/spdk/test/common/build_config.sh ]] 00:11:51.400 02:07:16 nvme_xnvme -- common/autotest_common.sh@45 -- # source /home/vagrant/spdk_repo/spdk/test/common/build_config.sh 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@1 -- # CONFIG_WPDK_DIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@2 -- # CONFIG_ASAN=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@3 -- # CONFIG_VBDEV_COMPRESS=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@4 -- # CONFIG_HAVE_EXECINFO_H=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@5 -- # CONFIG_USDT=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@6 -- # CONFIG_CUSTOMOCF=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@7 -- # CONFIG_PREFIX=/usr/local 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@8 -- # CONFIG_RBD=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@9 -- # CONFIG_LIBDIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@10 -- # CONFIG_IDXD=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@11 -- # CONFIG_NVME_CUSE=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@12 -- # CONFIG_SMA=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@13 -- # CONFIG_VTUNE=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@14 -- # CONFIG_TSAN=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@15 -- # CONFIG_RDMA_SEND_WITH_INVAL=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@16 -- # CONFIG_VFIO_USER_DIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@17 -- # CONFIG_MAX_NUMA_NODES=1 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@18 -- # CONFIG_PGO_CAPTURE=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@19 -- # CONFIG_HAVE_UUID_GENERATE_SHA1=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@20 -- # CONFIG_ENV=/home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@21 -- # CONFIG_LTO=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@22 -- # CONFIG_ISCSI_INITIATOR=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@23 -- # CONFIG_CET=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@24 -- # CONFIG_VBDEV_COMPRESS_MLX5=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@25 -- # CONFIG_OCF_PATH= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@26 -- # CONFIG_RDMA_SET_TOS=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@27 -- # CONFIG_AIO_FSDEV=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@28 -- # CONFIG_HAVE_ARC4RANDOM=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@29 -- # CONFIG_HAVE_LIBARCHIVE=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@30 -- # CONFIG_UBLK=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@31 -- # CONFIG_ISAL_CRYPTO=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@32 -- # CONFIG_OPENSSL_PATH= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@33 -- # CONFIG_OCF=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@34 -- # CONFIG_FUSE=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@35 -- # CONFIG_VTUNE_DIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@36 -- # CONFIG_FUZZER_LIB= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@37 -- # CONFIG_FUZZER=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@38 -- # CONFIG_FSDEV=y 
00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@39 -- # CONFIG_DPDK_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@40 -- # CONFIG_CRYPTO=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@41 -- # CONFIG_PGO_USE=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@42 -- # CONFIG_VHOST=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@43 -- # CONFIG_DAOS=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@44 -- # CONFIG_DPDK_INC_DIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@45 -- # CONFIG_DAOS_DIR= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@46 -- # CONFIG_UNIT_TESTS=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@47 -- # CONFIG_RDMA_SET_ACK_TIMEOUT=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@48 -- # CONFIG_VIRTIO=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@49 -- # CONFIG_DPDK_UADK=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@50 -- # CONFIG_COVERAGE=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@51 -- # CONFIG_RDMA=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@52 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIM=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@53 -- # CONFIG_HAVE_LZ4=n 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@54 -- # CONFIG_FIO_SOURCE_DIR=/usr/src/fio 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@55 -- # CONFIG_URING_PATH= 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@56 -- # CONFIG_XNVME=y 00:11:51.400 02:07:16 nvme_xnvme -- common/build_config.sh@57 -- # CONFIG_VFIO_USER=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@58 -- # CONFIG_ARCH=native 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@59 -- # CONFIG_HAVE_EVP_MAC=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@60 -- # CONFIG_URING_ZNS=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@61 -- # CONFIG_WERROR=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@62 -- # CONFIG_HAVE_LIBBSD=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@63 -- # CONFIG_UBSAN=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@64 -- # CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@65 -- # CONFIG_IPSEC_MB_DIR= 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@66 -- # CONFIG_GOLANG=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@67 -- # CONFIG_ISAL=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@68 -- # CONFIG_IDXD_KERNEL=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@69 -- # CONFIG_DPDK_LIB_DIR= 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@70 -- # CONFIG_RDMA_PROV=verbs 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@71 -- # CONFIG_APPS=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@72 -- # CONFIG_SHARED=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@73 -- # CONFIG_HAVE_KEYUTILS=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@74 -- # CONFIG_FC_PATH= 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@75 -- # CONFIG_DPDK_PKG_CONFIG=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@76 -- # CONFIG_FC=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@77 -- # CONFIG_AVAHI=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@78 -- # CONFIG_FIO_PLUGIN=y 00:11:51.401 02:07:16 nvme_xnvme -- 
common/build_config.sh@79 -- # CONFIG_RAID5F=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@80 -- # CONFIG_EXAMPLES=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@81 -- # CONFIG_TESTS=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@82 -- # CONFIG_CRYPTO_MLX5=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@83 -- # CONFIG_MAX_LCORES=128 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@84 -- # CONFIG_IPSEC_MB=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@85 -- # CONFIG_PGO_DIR= 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@86 -- # CONFIG_DEBUG=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@87 -- # CONFIG_DPDK_COMPRESSDEV=n 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@88 -- # CONFIG_CROSS_PREFIX= 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@89 -- # CONFIG_COPY_FILE_RANGE=y 00:11:51.401 02:07:16 nvme_xnvme -- common/build_config.sh@90 -- # CONFIG_URING=n 00:11:51.401 02:07:16 nvme_xnvme -- common/autotest_common.sh@54 -- # source /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/common/applications.sh 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/common 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@8 -- # _root=/home/vagrant/spdk_repo/spdk/test/common 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@9 -- # _root=/home/vagrant/spdk_repo/spdk 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@10 -- # _app_dir=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@11 -- # _test_app_dir=/home/vagrant/spdk_repo/spdk/test/app 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@12 -- # _examples_dir=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@14 -- # VHOST_FUZZ_APP=("$_test_app_dir/fuzz/vhost_fuzz/vhost_fuzz") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@15 -- # ISCSI_APP=("$_app_dir/iscsi_tgt") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@16 -- # NVMF_APP=("$_app_dir/nvmf_tgt") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@17 -- # VHOST_APP=("$_app_dir/vhost") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@18 -- # DD_APP=("$_app_dir/spdk_dd") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@19 -- # SPDK_APP=("$_app_dir/spdk_tgt") 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@22 -- # [[ -e /home/vagrant/spdk_repo/spdk/include/spdk/config.h ]] 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@23 -- # [[ #ifndef SPDK_CONFIG_H 00:11:51.401 #define SPDK_CONFIG_H 00:11:51.401 #define SPDK_CONFIG_AIO_FSDEV 1 00:11:51.401 #define SPDK_CONFIG_APPS 1 00:11:51.401 #define SPDK_CONFIG_ARCH native 00:11:51.401 #define SPDK_CONFIG_ASAN 1 00:11:51.401 #undef SPDK_CONFIG_AVAHI 00:11:51.401 #undef SPDK_CONFIG_CET 00:11:51.401 #define SPDK_CONFIG_COPY_FILE_RANGE 1 00:11:51.401 #define SPDK_CONFIG_COVERAGE 1 00:11:51.401 #define SPDK_CONFIG_CROSS_PREFIX 00:11:51.401 #undef SPDK_CONFIG_CRYPTO 00:11:51.401 #undef SPDK_CONFIG_CRYPTO_MLX5 00:11:51.401 #undef SPDK_CONFIG_CUSTOMOCF 00:11:51.401 #undef SPDK_CONFIG_DAOS 00:11:51.401 #define SPDK_CONFIG_DAOS_DIR 00:11:51.401 #define SPDK_CONFIG_DEBUG 1 00:11:51.401 #undef SPDK_CONFIG_DPDK_COMPRESSDEV 00:11:51.401 #define SPDK_CONFIG_DPDK_DIR 
/home/vagrant/spdk_repo/spdk/dpdk/build 00:11:51.401 #define SPDK_CONFIG_DPDK_INC_DIR 00:11:51.401 #define SPDK_CONFIG_DPDK_LIB_DIR 00:11:51.401 #undef SPDK_CONFIG_DPDK_PKG_CONFIG 00:11:51.401 #undef SPDK_CONFIG_DPDK_UADK 00:11:51.401 #define SPDK_CONFIG_ENV /home/vagrant/spdk_repo/spdk/lib/env_dpdk 00:11:51.401 #define SPDK_CONFIG_EXAMPLES 1 00:11:51.401 #undef SPDK_CONFIG_FC 00:11:51.401 #define SPDK_CONFIG_FC_PATH 00:11:51.401 #define SPDK_CONFIG_FIO_PLUGIN 1 00:11:51.401 #define SPDK_CONFIG_FIO_SOURCE_DIR /usr/src/fio 00:11:51.401 #define SPDK_CONFIG_FSDEV 1 00:11:51.401 #undef SPDK_CONFIG_FUSE 00:11:51.401 #undef SPDK_CONFIG_FUZZER 00:11:51.401 #define SPDK_CONFIG_FUZZER_LIB 00:11:51.401 #undef SPDK_CONFIG_GOLANG 00:11:51.401 #define SPDK_CONFIG_HAVE_ARC4RANDOM 1 00:11:51.401 #define SPDK_CONFIG_HAVE_EVP_MAC 1 00:11:51.401 #define SPDK_CONFIG_HAVE_EXECINFO_H 1 00:11:51.401 #define SPDK_CONFIG_HAVE_KEYUTILS 1 00:11:51.401 #undef SPDK_CONFIG_HAVE_LIBARCHIVE 00:11:51.401 #undef SPDK_CONFIG_HAVE_LIBBSD 00:11:51.401 #undef SPDK_CONFIG_HAVE_LZ4 00:11:51.401 #define SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIM 1 00:11:51.401 #undef SPDK_CONFIG_HAVE_STRUCT_STAT_ST_ATIMESPEC 00:11:51.401 #define SPDK_CONFIG_HAVE_UUID_GENERATE_SHA1 1 00:11:51.401 #define SPDK_CONFIG_IDXD 1 00:11:51.401 #define SPDK_CONFIG_IDXD_KERNEL 1 00:11:51.401 #undef SPDK_CONFIG_IPSEC_MB 00:11:51.401 #define SPDK_CONFIG_IPSEC_MB_DIR 00:11:51.401 #define SPDK_CONFIG_ISAL 1 00:11:51.401 #define SPDK_CONFIG_ISAL_CRYPTO 1 00:11:51.401 #define SPDK_CONFIG_ISCSI_INITIATOR 1 00:11:51.401 #define SPDK_CONFIG_LIBDIR 00:11:51.401 #undef SPDK_CONFIG_LTO 00:11:51.401 #define SPDK_CONFIG_MAX_LCORES 128 00:11:51.401 #define SPDK_CONFIG_MAX_NUMA_NODES 1 00:11:51.401 #define SPDK_CONFIG_NVME_CUSE 1 00:11:51.401 #undef SPDK_CONFIG_OCF 00:11:51.401 #define SPDK_CONFIG_OCF_PATH 00:11:51.401 #define SPDK_CONFIG_OPENSSL_PATH 00:11:51.401 #undef SPDK_CONFIG_PGO_CAPTURE 00:11:51.401 #define SPDK_CONFIG_PGO_DIR 00:11:51.401 #undef SPDK_CONFIG_PGO_USE 00:11:51.401 #define SPDK_CONFIG_PREFIX /usr/local 00:11:51.401 #undef SPDK_CONFIG_RAID5F 00:11:51.401 #undef SPDK_CONFIG_RBD 00:11:51.401 #define SPDK_CONFIG_RDMA 1 00:11:51.401 #define SPDK_CONFIG_RDMA_PROV verbs 00:11:51.401 #define SPDK_CONFIG_RDMA_SEND_WITH_INVAL 1 00:11:51.401 #define SPDK_CONFIG_RDMA_SET_ACK_TIMEOUT 1 00:11:51.401 #define SPDK_CONFIG_RDMA_SET_TOS 1 00:11:51.401 #define SPDK_CONFIG_SHARED 1 00:11:51.401 #undef SPDK_CONFIG_SMA 00:11:51.401 #define SPDK_CONFIG_TESTS 1 00:11:51.401 #undef SPDK_CONFIG_TSAN 00:11:51.401 #define SPDK_CONFIG_UBLK 1 00:11:51.401 #define SPDK_CONFIG_UBSAN 1 00:11:51.401 #undef SPDK_CONFIG_UNIT_TESTS 00:11:51.401 #undef SPDK_CONFIG_URING 00:11:51.401 #define SPDK_CONFIG_URING_PATH 00:11:51.401 #undef SPDK_CONFIG_URING_ZNS 00:11:51.401 #undef SPDK_CONFIG_USDT 00:11:51.401 #undef SPDK_CONFIG_VBDEV_COMPRESS 00:11:51.401 #undef SPDK_CONFIG_VBDEV_COMPRESS_MLX5 00:11:51.401 #undef SPDK_CONFIG_VFIO_USER 00:11:51.401 #define SPDK_CONFIG_VFIO_USER_DIR 00:11:51.401 #define SPDK_CONFIG_VHOST 1 00:11:51.401 #define SPDK_CONFIG_VIRTIO 1 00:11:51.401 #undef SPDK_CONFIG_VTUNE 00:11:51.401 #define SPDK_CONFIG_VTUNE_DIR 00:11:51.401 #define SPDK_CONFIG_WERROR 1 00:11:51.401 #define SPDK_CONFIG_WPDK_DIR 00:11:51.401 #define SPDK_CONFIG_XNVME 1 00:11:51.401 #endif /* SPDK_CONFIG_H */ == *\#\d\e\f\i\n\e\ \S\P\D\K\_\C\O\N\F\I\G\_\D\E\B\U\G* ]] 00:11:51.401 02:07:16 nvme_xnvme -- common/applications.sh@24 -- # (( SPDK_AUTOTEST_DEBUG_APPS )) 00:11:51.401 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@55 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.401 02:07:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.401 02:07:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.401 02:07:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.401 02:07:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.401 02:07:16 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.401 02:07:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.401 02:07:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.401 02:07:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.402 02:07:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@56 -- # source /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@6 -- # dirname /home/vagrant/spdk_repo/spdk/scripts/perf/pm/common 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@6 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@6 -- # _pmdir=/home/vagrant/spdk_repo/spdk/scripts/perf/pm 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@7 -- # readlink -f /home/vagrant/spdk_repo/spdk/scripts/perf/pm/../../../ 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@7 -- # _pmrootdir=/home/vagrant/spdk_repo/spdk 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@64 -- # TEST_TAG=N/A 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@65 -- # TEST_TAG_FILE=/home/vagrant/spdk_repo/spdk/.run_test_name 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@67 -- # PM_OUTPUTDIR=/home/vagrant/spdk_repo/spdk/../output/power 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@68 -- # uname -s 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@68 -- # PM_OS=Linux 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@70 -- # MONITOR_RESOURCES_SUDO=() 00:11:51.402 
02:07:16 nvme_xnvme -- pm/common@70 -- # declare -A MONITOR_RESOURCES_SUDO 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@71 -- # MONITOR_RESOURCES_SUDO["collect-bmc-pm"]=1 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@72 -- # MONITOR_RESOURCES_SUDO["collect-cpu-load"]=0 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@73 -- # MONITOR_RESOURCES_SUDO["collect-cpu-temp"]=0 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@74 -- # MONITOR_RESOURCES_SUDO["collect-vmstat"]=0 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@76 -- # SUDO[0]= 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@76 -- # SUDO[1]='sudo -E' 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@78 -- # MONITOR_RESOURCES=(collect-cpu-load collect-vmstat) 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@79 -- # [[ Linux == FreeBSD ]] 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@81 -- # [[ Linux == Linux ]] 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@81 -- # [[ QEMU != QEMU ]] 00:11:51.402 02:07:16 nvme_xnvme -- pm/common@88 -- # [[ ! -d /home/vagrant/spdk_repo/spdk/../output/power ]] 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@58 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@59 -- # export RUN_NIGHTLY 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@62 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@63 -- # export SPDK_AUTOTEST_DEBUG_APPS 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@64 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@65 -- # export SPDK_RUN_VALGRIND 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@66 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@67 -- # export SPDK_RUN_FUNCTIONAL_TEST 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@68 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@69 -- # export SPDK_TEST_UNITTEST 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@70 -- # : 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@71 -- # export SPDK_TEST_AUTOBUILD 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@72 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@73 -- # export SPDK_TEST_RELEASE_BUILD 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@74 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@75 -- # export SPDK_TEST_ISAL 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@76 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@77 -- # export SPDK_TEST_ISCSI 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@78 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@79 -- # export SPDK_TEST_ISCSI_INITIATOR 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@80 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@81 -- # export SPDK_TEST_NVME 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@82 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@83 -- # export SPDK_TEST_NVME_PMR 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@84 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@85 -- # export SPDK_TEST_NVME_BP 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@86 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@87 -- # export SPDK_TEST_NVME_CLI 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@88 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@89 -- # export SPDK_TEST_NVME_CUSE 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@90 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@91 -- # export SPDK_TEST_NVME_FDP 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@92 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@93 -- # export SPDK_TEST_NVMF 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@94 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@95 -- # export SPDK_TEST_VFIOUSER 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@96 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@97 -- # export SPDK_TEST_VFIOUSER_QEMU 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@98 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@99 -- # export SPDK_TEST_FUZZER 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@100 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@101 -- # export SPDK_TEST_FUZZER_SHORT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@102 -- # : rdma 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@103 -- # export SPDK_TEST_NVMF_TRANSPORT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@104 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@105 -- # export SPDK_TEST_RBD 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@106 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@107 -- # export SPDK_TEST_VHOST 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@108 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@109 -- # export SPDK_TEST_BLOCKDEV 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@110 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@111 -- # export SPDK_TEST_RAID 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@112 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@113 -- # export SPDK_TEST_IOAT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@114 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@115 -- # export SPDK_TEST_BLOBFS 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@116 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@117 -- # export SPDK_TEST_VHOST_INIT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@118 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@119 -- # export SPDK_TEST_LVOL 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@120 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@121 -- # export SPDK_TEST_VBDEV_COMPRESS 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@122 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@123 -- # export SPDK_RUN_ASAN 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@124 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@125 -- # export SPDK_RUN_UBSAN 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@126 -- # : 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@127 -- # export SPDK_RUN_EXTERNAL_DPDK 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@128 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@129 -- # export SPDK_RUN_NON_ROOT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@130 -- # : 
0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@131 -- # export SPDK_TEST_CRYPTO 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@132 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@133 -- # export SPDK_TEST_FTL 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@134 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@135 -- # export SPDK_TEST_OCF 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@136 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@137 -- # export SPDK_TEST_VMD 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@138 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@139 -- # export SPDK_TEST_OPAL 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@140 -- # : 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@141 -- # export SPDK_TEST_NATIVE_DPDK 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@142 -- # : true 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@143 -- # export SPDK_AUTOTEST_X 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@144 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@145 -- # export SPDK_TEST_URING 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@146 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@147 -- # export SPDK_TEST_USDT 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@148 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@149 -- # export SPDK_TEST_USE_IGB_UIO 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@150 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@151 -- # export SPDK_TEST_SCHEDULER 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@152 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@153 -- # export SPDK_TEST_SCANBUILD 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@154 -- # : 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@155 -- # export SPDK_TEST_NVMF_NICS 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@156 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@157 -- # export SPDK_TEST_SMA 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@158 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@159 -- # export SPDK_TEST_DAOS 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@160 -- # : 1 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@161 -- # export SPDK_TEST_XNVME 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@162 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@163 -- # export SPDK_TEST_ACCEL 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@164 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@165 -- # export SPDK_TEST_ACCEL_DSA 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@166 -- # : 0 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@167 -- # export SPDK_TEST_ACCEL_IAA 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@169 -- # : 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@170 -- # export SPDK_TEST_FUZZER_TARGET 00:11:51.402 02:07:16 nvme_xnvme -- common/autotest_common.sh@171 -- # : 0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@172 -- # export SPDK_TEST_NVMF_MDNS 00:11:51.403 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@173 -- # : 0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@174 -- # export SPDK_JSONRPC_GO_CLIENT 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@175 -- # : 0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@176 -- # export SPDK_TEST_SETUP 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@177 -- # : 0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@178 -- # export SPDK_TEST_NVME_INTERRUPT 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@181 -- # export SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@181 -- # SPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@182 -- # export DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@182 -- # DPDK_LIB_DIR=/home/vagrant/spdk_repo/spdk/dpdk/build/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@183 -- # export VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@183 -- # VFIO_LIB_DIR=/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@184 -- # export LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@184 -- # LD_LIBRARY_PATH=:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib:/home/vagrant/spdk_repo/spdk/build/lib:/home/vagrant/spdk_repo/spdk/dpdk/build/lib:/home/vagrant/spdk_repo/spdk/build/libvfio-user/usr/local/lib 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@187 -- # export PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@187 -- # PCI_BLOCK_SYNC_ON_RESET=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@191 -- # export PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@191 -- # PYTHONPATH=:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python:/home/vagrant/spdk_repo/spdk/test/rpc_plugins:/home/vagrant/spdk_repo/spdk/python 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@195 -- # export PYTHONDONTWRITEBYTECODE=1 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@195 -- # PYTHONDONTWRITEBYTECODE=1 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@199 -- # export ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.403 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@199 -- # ASAN_OPTIONS=new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@200 -- # export UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@200 -- # UBSAN_OPTIONS=halt_on_error=1:print_stacktrace=1:abort_on_error=1:disable_coredump=0:exitcode=134 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@204 -- # asan_suppression_file=/var/tmp/asan_suppression_file 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@205 -- # rm -rf /var/tmp/asan_suppression_file 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@206 -- # cat 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@242 -- # echo leak:libfuse3.so 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@244 -- # export LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@244 -- # LSAN_OPTIONS=suppressions=/var/tmp/asan_suppression_file 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@246 -- # export DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@246 -- # DEFAULT_RPC_ADDR=/var/tmp/spdk.sock 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@248 -- # '[' -z /var/spdk/dependencies ']' 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@251 -- # export DEPENDENCY_DIR 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@255 -- # export SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@255 -- # SPDK_BIN_DIR=/home/vagrant/spdk_repo/spdk/build/bin 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@256 -- # export SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@256 -- # SPDK_EXAMPLE_DIR=/home/vagrant/spdk_repo/spdk/build/examples 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@259 -- # export QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@259 -- # QEMU_BIN=/usr/local/qemu/vanilla-latest/bin/qemu-system-x86_64 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@260 -- # export VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@260 -- # VFIO_QEMU_BIN=/usr/local/qemu/vfio-user-latest/bin/qemu-system-x86_64 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@262 -- # export AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@262 -- # AR_TOOL=/home/vagrant/spdk_repo/spdk/scripts/ar-xnvme-fixer 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@265 -- # export UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@265 -- # UNBIND_ENTIRE_IOMMU_GROUP=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@267 -- # _LCOV_MAIN=0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@268 -- # _LCOV_LLVM=1 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@269 -- # _LCOV= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ '' == *clang* ]] 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@270 -- # [[ 0 -eq 1 ]] 
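[editor's note] Worth noting in the block above: besides exporting ASAN_OPTIONS and UBSAN_OPTIONS, the harness rebuilds a LeakSanitizer suppression file on the fly and points LSAN_OPTIONS at it. A condensed sketch of that pattern — the paths, the leak:libfuse3.so entry, and the option strings are copied from the trace; the surrounding shape is paraphrased:

    # rebuild the suppression file and register known-benign leaks
    supp=/var/tmp/asan_suppression_file
    rm -rf "$supp"
    echo "leak:libfuse3.so" >> "$supp"   # fuse3 leak is outside SPDK's control
    export LSAN_OPTIONS="suppressions=$supp"
    export ASAN_OPTIONS="new_delete_type_mismatch=0:disable_coredump=0:abort_on_error=1:use_sigaltstack=0"
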
00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@272 -- # _lcov_opt[_LCOV_LLVM]='--gcov-tool /home/vagrant/spdk_repo/spdk/test/fuzz/llvm/llvm-gcov.sh' 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@273 -- # _lcov_opt[_LCOV_MAIN]= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@275 -- # lcov_opt= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@278 -- # '[' 0 -eq 0 ']' 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@279 -- # export valgrind= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@279 -- # valgrind= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@285 -- # uname -s 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@285 -- # '[' Linux = Linux ']' 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@286 -- # HUGEMEM=4096 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@287 -- # export CLEAR_HUGE=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@287 -- # CLEAR_HUGE=yes 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@289 -- # MAKE=make 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@290 -- # MAKEFLAGS=-j10 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@306 -- # export HUGEMEM=4096 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@306 -- # HUGEMEM=4096 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@308 -- # NO_HUGE=() 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@309 -- # TEST_MODE= 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@331 -- # [[ -z 70364 ]] 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@331 -- # kill -0 70364 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@1696 -- # set_test_storage 2147483648 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@341 -- # [[ -v testdir ]] 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@343 -- # local requested_size=2147483648 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@344 -- # local mount target_dir 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@346 -- # local -A mounts fss sizes avails uses 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@347 -- # local source fs size avail mount use 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@349 -- # local storage_fallback storage_candidates 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@351 -- # mktemp -udt spdk.XXXXXX 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@351 -- # storage_fallback=/tmp/spdk.efcOJm 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@356 -- # storage_candidates=("$testdir" "$storage_fallback/tests/${testdir##*/}" "$storage_fallback") 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@358 -- # [[ -n '' ]] 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@363 -- # [[ -n '' ]] 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@368 -- # mkdir -p /home/vagrant/spdk_repo/spdk/test/nvme/xnvme /tmp/spdk.efcOJm/tests/xnvme /tmp/spdk.efcOJm 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@371 -- # requested_size=2214592512 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@340 -- # df -T 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@340 -- # grep -v Filesystem 00:11:51.403 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971664896 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596286976 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=devtmpfs 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=devtmpfs 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=4194304 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=4194304 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=0 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.403 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6260629504 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265393152 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=4763648 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=2493362176 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=2506158080 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12795904 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda5 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=btrfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=13971664896 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=20314062848 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=5596286976 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=6265249792 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=6265397248 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=147456 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- 
common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda2 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=ext4 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=840085504 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1012768768 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=103477248 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=/dev/vda3 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=vfat 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=91617280 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=104607744 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12990464 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=tmpfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=1253064704 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=1253076992 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=12288 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # mounts["$mount"]=:/mnt/jenkins_nvme/jenkins/workspace/nvme-vg-autotest/fedora39-libvirt/output 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@374 -- # fss["$mount"]=fuse.sshfs 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # avails["$mount"]=98871054336 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@375 -- # sizes["$mount"]=105088212992 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@376 -- # uses["$mount"]=831725568 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@373 -- # read -r source fs size use avail _ mount 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@379 -- # printf '* Looking for test storage...\n' 00:11:51.404 * Looking for test storage... 
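[editor's note] At this point set_test_storage has parsed df -T into associative arrays keyed by mount point (above) and is about to walk the candidate directories until one has at least the requested 2 GiB free (below). A compact sketch of that selection loop, simplified from the trace; the *1024 conversion is an assumption inferred from the byte-sized values the arrays hold, and testdir/storage_fallback are the candidate names the log uses:

    # pick the first candidate directory whose filesystem has enough free space
    requested_size=2147483648   # 2 GiB, as in the trace
    declare -A avails
    while read -r src fs size used avail _ mount; do
        avails["$mount"]=$(( avail * 1024 ))   # df -T reports 1K blocks (assumed)
    done < <(df -T | grep -v Filesystem)

    for target_dir in "$testdir" "$storage_fallback"; do
        mount=$(df "$target_dir" | awk '$1 !~ /Filesystem/{print $6}')
        if (( ${avails[$mount]:-0} >= requested_size )); then
            export SPDK_TEST_STORAGE="$target_dir"
            break
        fi
    done
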
00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@381 -- # local target_space new_size 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@382 -- # for target_dir in "${storage_candidates[@]}" 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@385 -- # df /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@385 -- # awk '$1 !~ /Filesystem/{print $6}' 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@385 -- # mount=/home 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@387 -- # target_space=13971664896 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@388 -- # (( target_space == 0 || target_space < requested_size )) 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@391 -- # (( target_space >= requested_size )) 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == tmpfs ]] 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ btrfs == ramfs ]] 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@393 -- # [[ /home == / ]] 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@400 -- # export SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@400 -- # SPDK_TEST_STORAGE=/home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@401 -- # printf '* Found test storage at %s\n' /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.404 * Found test storage at /home/vagrant/spdk_repo/spdk/test/nvme/xnvme 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@402 -- # return 0 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1698 -- # set -o errtrace 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1699 -- # shopt -s extdebug 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1700 -- # trap 'trap - ERR; print_backtrace >&2' ERR 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1702 -- # PS4=' \t ${test_domain:-} -- ${BASH_SOURCE#${BASH_SOURCE%/*/*}/}@${LINENO} -- \$ ' 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1703 -- # true 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1705 -- # xtrace_fd 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -n 13 ]] 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@25 -- # [[ -e /proc/self/fd/13 ]] 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@27 -- # exec 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@29 -- # exec 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@31 -- # xtrace_restore 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@16 -- # unset -v 'X_STACK[0 - 1 < 0 ? 
0 : 0 - 1]' 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@17 -- # (( 0 == 0 )) 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@18 -- # set -x 00:11:51.404 02:07:16 nvme_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@345 -- # : 1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@365 -- # decimal 1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@366 -- # decimal 2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@353 -- # local d=2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@355 -- # echo 2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:11:51.666 02:07:16 nvme_xnvme -- scripts/common.sh@368 -- # return 0 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:11:51.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.666 --rc genhtml_branch_coverage=1 00:11:51.666 --rc genhtml_function_coverage=1 00:11:51.666 --rc genhtml_legend=1 00:11:51.666 --rc geninfo_all_blocks=1 00:11:51.666 --rc geninfo_unexecuted_blocks=1 00:11:51.666 00:11:51.666 ' 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:11:51.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.666 --rc genhtml_branch_coverage=1 00:11:51.666 --rc genhtml_function_coverage=1 00:11:51.666 --rc genhtml_legend=1 00:11:51.666 --rc geninfo_all_blocks=1 
00:11:51.666 --rc geninfo_unexecuted_blocks=1 00:11:51.666 00:11:51.666 ' 00:11:51.666 02:07:16 nvme_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:11:51.666 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.666 --rc genhtml_branch_coverage=1 00:11:51.667 --rc genhtml_function_coverage=1 00:11:51.667 --rc genhtml_legend=1 00:11:51.667 --rc geninfo_all_blocks=1 00:11:51.667 --rc geninfo_unexecuted_blocks=1 00:11:51.667 00:11:51.667 ' 00:11:51.667 02:07:16 nvme_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:11:51.667 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:11:51.667 --rc genhtml_branch_coverage=1 00:11:51.667 --rc genhtml_function_coverage=1 00:11:51.667 --rc genhtml_legend=1 00:11:51.667 --rc geninfo_all_blocks=1 00:11:51.667 --rc geninfo_unexecuted_blocks=1 00:11:51.667 00:11:51.667 ' 00:11:51.667 02:07:16 nvme_xnvme -- dd/common.sh@7 -- # source /home/vagrant/spdk_repo/spdk/scripts/common.sh 00:11:51.667 02:07:16 nvme_xnvme -- scripts/common.sh@15 -- # shopt -s extglob 00:11:51.667 02:07:16 nvme_xnvme -- scripts/common.sh@544 -- # [[ -e /bin/wpdk_common.sh ]] 00:11:51.667 02:07:16 nvme_xnvme -- scripts/common.sh@552 -- # [[ -e /etc/opt/spdk-pkgdep/paths/export.sh ]] 00:11:51.667 02:07:16 nvme_xnvme -- scripts/common.sh@553 -- # source /etc/opt/spdk-pkgdep/paths/export.sh 00:11:51.667 02:07:16 nvme_xnvme -- paths/export.sh@2 -- # PATH=/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.667 02:07:16 nvme_xnvme -- paths/export.sh@3 -- # PATH=/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.667 02:07:16 nvme_xnvme -- paths/export.sh@4 -- # PATH=/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.667 02:07:16 nvme_xnvme -- paths/export.sh@5 -- # export PATH 00:11:51.667 02:07:16 nvme_xnvme -- paths/export.sh@6 -- # echo /opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/opt/protoc/21.7/bin:/opt/go/1.21.1/bin:/opt/golangci/1.54.2/bin:/usr/local/bin:/usr/local/sbin:/var/spdk/dependencies/pip/bin:/usr/local/sbin:/usr/local/bin:/usr/sbin:/usr/bin:/sbin:/bin:/var/lib/snapd/snap/bin 00:11:51.667 02:07:16 nvme_xnvme -- 
xnvme/common.sh@12 -- # xnvme_io=('libaio' 'io_uring' 'io_uring_cmd') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@12 -- # declare -a xnvme_io 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@18 -- # libaio=('randread' 'randwrite') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@18 -- # declare -a libaio 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@23 -- # io_uring=('randread' 'randwrite') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@23 -- # declare -a io_uring 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@27 -- # io_uring_cmd=('randread' 'randwrite' 'unmap' 'write_zeroes') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@27 -- # declare -a io_uring_cmd 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@33 -- # libaio_fio=('randread' 'randwrite') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@33 -- # declare -a libaio_fio 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@37 -- # io_uring_fio=('randread' 'randwrite') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@37 -- # declare -a io_uring_fio 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@41 -- # io_uring_cmd_fio=('randread' 'randwrite') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@41 -- # declare -a io_uring_cmd_fio 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@45 -- # xnvme_filename=(['libaio']='/dev/nvme0n1' ['io_uring']='/dev/nvme0n1' ['io_uring_cmd']='/dev/ng0n1') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@45 -- # declare -A xnvme_filename 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@51 -- # xnvme_conserve_cpu=('false' 'true') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@51 -- # declare -a xnvme_conserve_cpu 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@57 -- # method_bdev_xnvme_create_0=(['name']='xnvme_bdev' ['filename']='/dev/nvme0n1' ['io_mechanism']='libaio' ['conserve_cpu']='false') 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@57 -- # declare -A method_bdev_xnvme_create_0 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@89 -- # prep_nvme 00:11:51.667 02:07:16 nvme_xnvme -- xnvme/common.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:11:51.928 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:11:52.190 Waiting for block devices as requested 00:11:52.190 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.190 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.190 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:11:52.452 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:11:57.745 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:11:57.745 02:07:22 nvme_xnvme -- xnvme/common.sh@73 -- # modprobe -r nvme 00:11:57.745 02:07:22 nvme_xnvme -- xnvme/common.sh@74 -- # nproc 00:11:57.745 02:07:22 nvme_xnvme -- xnvme/common.sh@74 -- # modprobe nvme poll_queues=10 00:11:58.007 02:07:22 nvme_xnvme -- xnvme/common.sh@77 -- # local nvme 00:11:58.007 02:07:22 nvme_xnvme -- xnvme/common.sh@78 -- # for nvme in /dev/nvme*n!(*p*) 00:11:58.007 02:07:22 nvme_xnvme -- xnvme/common.sh@79 -- # block_in_use /dev/nvme0n1 00:11:58.007 02:07:22 nvme_xnvme -- scripts/common.sh@381 -- # local block=/dev/nvme0n1 pt 00:11:58.007 02:07:22 nvme_xnvme -- scripts/common.sh@390 -- # /home/vagrant/spdk_repo/spdk/scripts/spdk-gpt.py /dev/nvme0n1 00:11:58.268 No valid GPT data, bailing 00:11:58.268 02:07:22 nvme_xnvme -- scripts/common.sh@394 -- # blkid -s PTTYPE -o value /dev/nvme0n1 00:11:58.268 02:07:22 nvme_xnvme -- 
scripts/common.sh@394 -- # pt= 00:11:58.268 02:07:22 nvme_xnvme -- scripts/common.sh@395 -- # return 1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/common.sh@80 -- # xnvme_filename["libaio"]=/dev/nvme0n1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/common.sh@81 -- # xnvme_filename["io_uring"]=/dev/nvme0n1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/common.sh@82 -- # xnvme_filename["io_uring_cmd"]=/dev/ng0n1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/common.sh@83 -- # return 0 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@73 -- # trap 'killprocess "$spdk_tgt"' EXIT 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=libaio 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:11:58.268 02:07:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:11:58.268 02:07:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:11:58.268 02:07:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:11:58.268 02:07:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:11:58.268 ************************************ 00:11:58.268 START TEST xnvme_rpc 00:11:58.268 ************************************ 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=70755 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 70755 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 70755 ']' 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:11:58.268 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:58.268 02:07:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:11:58.268 [2024-12-15 02:07:22.913555] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
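[editor's note] The spdk_tgt startup messages here mark the beginning of the xnvme_rpc test, which follows a simple start/configure/verify/teardown cycle over the SPDK JSON-RPC socket. A hedged outline of the same flow driven by scripts/rpc.py — the bdev_xnvme_create/framework_get_config/bdev_xnvme_delete methods, arguments, and jq filter are exactly those in the trace, while the standalone-script framing and the socket wait loop are illustrative stand-ins for the harness's rpc_cmd and waitforlisten helpers:

    # start the target, wait for the RPC socket, then exercise the xnvme bdev RPCs
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt & tgt_pid=$!
    until [[ -S /var/tmp/spdk.sock ]]; do sleep 0.1; done   # crude stand-in for waitforlisten
    rpc="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock"

    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio   # filename, name, io_mechanism
    $rpc framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism'  # -> libaio
    $rpc bdev_xnvme_delete xnvme_bdev
    kill "$tgt_pid"
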
00:11:58.268 [2024-12-15 02:07:22.913695] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70755 ] 00:11:58.530 [2024-12-15 02:07:23.078825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:11:58.530 [2024-12-15 02:07:23.195766] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio '' 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 xnvme_bdev 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 02:07:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:11:59.475 02:07:24 
nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 70755 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 70755 ']' 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 70755 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 70755 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:11:59.475 killing process with pid 70755 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 70755' 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 70755 00:11:59.475 02:07:24 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 70755 00:12:01.392 ************************************ 00:12:01.392 END TEST xnvme_rpc 00:12:01.392 ************************************ 00:12:01.392 00:12:01.392 real 0m2.873s 00:12:01.392 user 0m2.855s 00:12:01.392 sys 0m0.486s 00:12:01.392 02:07:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:01.392 02:07:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:01.392 02:07:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:01.392 02:07:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:01.392 02:07:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:01.392 02:07:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:01.392 ************************************ 00:12:01.392 START TEST xnvme_bdevperf 00:12:01.392 ************************************ 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # 
gen_conf 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:01.392 02:07:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:01.392 { 00:12:01.392 "subsystems": [ 00:12:01.392 { 00:12:01.392 "subsystem": "bdev", 00:12:01.392 "config": [ 00:12:01.392 { 00:12:01.392 "params": { 00:12:01.392 "io_mechanism": "libaio", 00:12:01.392 "conserve_cpu": false, 00:12:01.392 "filename": "/dev/nvme0n1", 00:12:01.392 "name": "xnvme_bdev" 00:12:01.392 }, 00:12:01.392 "method": "bdev_xnvme_create" 00:12:01.392 }, 00:12:01.392 { 00:12:01.392 "method": "bdev_wait_for_examine" 00:12:01.392 } 00:12:01.392 ] 00:12:01.392 } 00:12:01.392 ] 00:12:01.392 } 00:12:01.392 [2024-12-15 02:07:25.844095] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:01.392 [2024-12-15 02:07:25.844259] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70824 ] 00:12:01.392 [2024-12-15 02:07:26.006611] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:01.392 [2024-12-15 02:07:26.123711] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:01.963 Running I/O for 5 seconds... 00:12:03.845 30007.00 IOPS, 117.21 MiB/s [2024-12-15T02:07:29.554Z] 29969.50 IOPS, 117.07 MiB/s [2024-12-15T02:07:30.553Z] 29436.67 IOPS, 114.99 MiB/s [2024-12-15T02:07:31.496Z] 28793.50 IOPS, 112.47 MiB/s 00:12:06.731 Latency(us) 00:12:06.731 [2024-12-15T02:07:31.496Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:06.731 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:06.731 xnvme_bdev : 5.00 27915.43 109.04 0.00 0.00 2287.76 196.92 7259.37 00:12:06.731 [2024-12-15T02:07:31.496Z] =================================================================================================================== 00:12:06.731 [2024-12-15T02:07:31.496Z] Total : 27915.43 109.04 0.00 0.00 2287.76 196.92 7259.37 00:12:07.675 02:07:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:07.675 02:07:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:07.675 02:07:32 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:07.675 02:07:32 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:07.675 02:07:32 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:07.675 { 00:12:07.675 "subsystems": [ 00:12:07.675 { 00:12:07.675 "subsystem": "bdev", 00:12:07.675 "config": [ 00:12:07.675 { 00:12:07.675 "params": { 00:12:07.675 "io_mechanism": "libaio", 00:12:07.675 "conserve_cpu": false, 00:12:07.675 "filename": "/dev/nvme0n1", 00:12:07.675 "name": "xnvme_bdev" 00:12:07.675 }, 00:12:07.675 "method": "bdev_xnvme_create" 00:12:07.675 }, 00:12:07.675 { 00:12:07.675 "method": "bdev_wait_for_examine" 00:12:07.675 } 00:12:07.675 ] 00:12:07.675 } 00:12:07.675 ] 00:12:07.675 } 00:12:07.675 [2024-12-15 02:07:32.300703] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
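[Annotation] The JSON blocks that keep appearing in this trace are the whole interface between the harness and bdevperf: gen_conf prints the bdev subsystem config on stdout and the shell passes it in as an anonymous descriptor, which is why the command line reads --json /dev/fd/62. A standalone equivalent of the randread run above (a sketch; binary path, device, and parameters are copied from this run):

    conf='{"subsystems":[{"subsystem":"bdev","config":[
      {"method":"bdev_xnvme_create","params":{"io_mechanism":"libaio",
       "conserve_cpu":false,"filename":"/dev/nvme0n1","name":"xnvme_bdev"}},
      {"method":"bdev_wait_for_examine"}]}]}'
    # <(...) expands to a /dev/fd path, matching --json /dev/fd/62 in the trace.
    /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
        --json <(echo "$conf") -q 64 -w randread -t 5 -T xnvme_bdev -o 4096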
00:12:07.675 [2024-12-15 02:07:32.300846] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid70907 ] 00:12:07.936 [2024-12-15 02:07:32.465443] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:07.936 [2024-12-15 02:07:32.582938] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:08.199 Running I/O for 5 seconds... 00:12:10.524 38227.00 IOPS, 149.32 MiB/s [2024-12-15T02:07:36.232Z] 37439.50 IOPS, 146.25 MiB/s [2024-12-15T02:07:37.175Z] 37425.67 IOPS, 146.19 MiB/s [2024-12-15T02:07:38.117Z] 37148.50 IOPS, 145.11 MiB/s [2024-12-15T02:07:38.117Z] 36349.40 IOPS, 141.99 MiB/s 00:12:13.352 Latency(us) 00:12:13.352 [2024-12-15T02:07:38.117Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:13.352 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:13.352 xnvme_bdev : 5.01 36324.18 141.89 0.00 0.00 1757.46 412.75 8519.68 00:12:13.352 [2024-12-15T02:07:38.117Z] =================================================================================================================== 00:12:13.352 [2024-12-15T02:07:38.117Z] Total : 36324.18 141.89 0.00 0.00 1757.46 412.75 8519.68 00:12:14.295 00:12:14.295 real 0m12.922s 00:12:14.295 user 0m5.059s 00:12:14.295 sys 0m6.259s 00:12:14.295 ************************************ 00:12:14.295 END TEST xnvme_bdevperf 00:12:14.295 ************************************ 00:12:14.295 02:07:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:14.295 02:07:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:14.295 02:07:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:14.295 02:07:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:14.295 02:07:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:14.295 02:07:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:14.295 ************************************ 00:12:14.295 START TEST xnvme_fio_plugin 00:12:14.295 ************************************ 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:14.295 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:14.296 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:14.296 02:07:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:14.296 { 00:12:14.296 "subsystems": [ 00:12:14.296 { 00:12:14.296 "subsystem": "bdev", 00:12:14.296 "config": [ 00:12:14.296 { 00:12:14.296 "params": { 00:12:14.296 "io_mechanism": "libaio", 00:12:14.296 "conserve_cpu": false, 00:12:14.296 "filename": "/dev/nvme0n1", 00:12:14.296 "name": "xnvme_bdev" 00:12:14.296 }, 00:12:14.296 "method": "bdev_xnvme_create" 00:12:14.296 }, 00:12:14.296 { 00:12:14.296 "method": "bdev_wait_for_examine" 00:12:14.296 } 00:12:14.296 ] 00:12:14.296 } 00:12:14.296 ] 00:12:14.296 } 00:12:14.296 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:14.296 fio-3.35 00:12:14.296 Starting 1 thread 00:12:20.887 00:12:20.887 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71022: Sun Dec 15 02:07:44 2024 00:12:20.887 read: IOPS=34.5k, BW=135MiB/s (141MB/s)(674MiB/5002msec) 00:12:20.887 slat (usec): min=4, max=1900, avg=20.98, stdev=88.28 00:12:20.887 clat (usec): min=105, max=7630, avg=1285.02, stdev=537.59 00:12:20.887 lat (usec): min=186, max=7634, avg=1306.01, stdev=531.00 00:12:20.887 clat percentiles (usec): 00:12:20.887 | 1.00th=[ 269], 5.00th=[ 469], 10.00th=[ 627], 20.00th=[ 840], 00:12:20.887 | 30.00th=[ 1004], 40.00th=[ 1123], 50.00th=[ 1254], 60.00th=[ 1385], 00:12:20.887 | 70.00th=[ 1516], 80.00th=[ 1680], 90.00th=[ 1926], 95.00th=[ 2180], 00:12:20.887 | 99.00th=[ 2933], 99.50th=[ 3326], 99.90th=[ 3949], 99.95th=[ 4228], 00:12:20.887 | 99.99th=[ 5145] 00:12:20.887 bw ( KiB/s): 
min=124328, max=148880, per=99.51%, avg=137376.67, stdev=8009.10, samples=9 00:12:20.887 iops : min=31082, max=37220, avg=34344.11, stdev=2002.23, samples=9 00:12:20.887 lat (usec) : 250=0.77%, 500=5.03%, 750=9.23%, 1000=14.73% 00:12:20.887 lat (msec) : 2=62.44%, 4=7.72%, 10=0.08% 00:12:20.887 cpu : usr=40.71%, sys=50.03%, ctx=12, majf=0, minf=764 00:12:20.887 IO depths : 1=0.4%, 2=1.2%, 4=3.2%, 8=8.7%, 16=23.6%, 32=60.8%, >=64=2.1% 00:12:20.887 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:20.887 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.6%, >=64=0.0% 00:12:20.887 issued rwts: total=172627,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:20.887 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:20.887 00:12:20.887 Run status group 0 (all jobs): 00:12:20.887 READ: bw=135MiB/s (141MB/s), 135MiB/s-135MiB/s (141MB/s-141MB/s), io=674MiB (707MB), run=5002-5002msec 00:12:21.149 ----------------------------------------------------- 00:12:21.149 Suppressions used: 00:12:21.149 count bytes template 00:12:21.149 1 11 /usr/src/fio/parse.c 00:12:21.149 1 8 libtcmalloc_minimal.so 00:12:21.149 1 904 libcrypto.so 00:12:21.149 ----------------------------------------------------- 00:12:21.149 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:21.149 
02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:21.149 02:07:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:21.149 { 00:12:21.149 "subsystems": [ 00:12:21.149 { 00:12:21.149 "subsystem": "bdev", 00:12:21.149 "config": [ 00:12:21.149 { 00:12:21.149 "params": { 00:12:21.149 "io_mechanism": "libaio", 00:12:21.149 "conserve_cpu": false, 00:12:21.149 "filename": "/dev/nvme0n1", 00:12:21.149 "name": "xnvme_bdev" 00:12:21.149 }, 00:12:21.149 "method": "bdev_xnvme_create" 00:12:21.149 }, 00:12:21.149 { 00:12:21.149 "method": "bdev_wait_for_examine" 00:12:21.149 } 00:12:21.149 ] 00:12:21.149 } 00:12:21.149 ] 00:12:21.149 } 00:12:21.149 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:21.149 fio-3.35 00:12:21.149 Starting 1 thread 00:12:27.738 00:12:27.738 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71118: Sun Dec 15 02:07:51 2024 00:12:27.738 write: IOPS=37.3k, BW=146MiB/s (153MB/s)(728MiB/5001msec); 0 zone resets 00:12:27.738 slat (usec): min=4, max=2476, avg=20.17, stdev=77.95 00:12:27.738 clat (usec): min=98, max=7989, avg=1175.73, stdev=545.08 00:12:27.738 lat (usec): min=194, max=8011, avg=1195.90, stdev=540.31 00:12:27.738 clat percentiles (usec): 00:12:27.738 | 1.00th=[ 260], 5.00th=[ 408], 10.00th=[ 553], 20.00th=[ 734], 00:12:27.738 | 30.00th=[ 881], 40.00th=[ 1004], 50.00th=[ 1123], 60.00th=[ 1237], 00:12:27.738 | 70.00th=[ 1385], 80.00th=[ 1565], 90.00th=[ 1827], 95.00th=[ 2089], 00:12:27.738 | 99.00th=[ 2835], 99.50th=[ 3195], 99.90th=[ 4015], 99.95th=[ 4621], 00:12:27.738 | 99.99th=[ 7898] 00:12:27.738 bw ( KiB/s): min=137048, max=161256, per=98.90%, avg=147417.33, stdev=10836.63, samples=9 00:12:27.738 iops : min=34262, max=40314, avg=36854.33, stdev=2709.16, samples=9 00:12:27.738 lat (usec) : 100=0.01%, 250=0.88%, 500=7.04%, 750=12.91%, 1000=19.18% 00:12:27.738 lat (msec) : 2=53.67%, 4=6.22%, 10=0.10% 00:12:27.738 cpu : usr=37.42%, sys=51.26%, ctx=15, majf=0, minf=765 00:12:27.738 IO depths : 1=0.4%, 2=1.0%, 4=3.0%, 8=8.9%, 16=24.0%, 32=60.7%, >=64=2.1% 00:12:27.738 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:27.738 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:27.738 issued rwts: total=0,186360,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:27.738 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:27.738 00:12:27.738 Run status group 0 (all jobs): 00:12:27.738 WRITE: bw=146MiB/s (153MB/s), 146MiB/s-146MiB/s (153MB/s-153MB/s), io=728MiB (763MB), run=5001-5001msec 00:12:27.999 ----------------------------------------------------- 00:12:27.999 Suppressions used: 00:12:27.999 count bytes template 00:12:27.999 1 11 /usr/src/fio/parse.c 00:12:27.999 1 8 libtcmalloc_minimal.so 00:12:27.999 1 904 libcrypto.so 00:12:27.999 ----------------------------------------------------- 00:12:27.999 00:12:27.999 00:12:27.999 real 0m13.881s 00:12:27.999 user 0m6.726s 00:12:27.999 sys 0m5.725s 
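[Annotation] Both fio runs above go through the same sanitizer dance: the harness inspects the fio plugin with ldd, pulls out the ASan runtime it was linked against, and preloads it ahead of the plugin so the dlopen'ed ioengine resolves its ASan symbols first. Condensed from the xtrace lines (a sketch; the real loop in autotest_common.sh also probes libclang_rt.asan):

    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # Preload order matters: the sanitizer runtime must come before the
    # plugin that was built against it.
    [[ -n $asan_lib ]] && export LD_PRELOAD="$asan_lib $plugin"
    # In the harness the JSON config arrives on /dev/fd/62 from gen_conf.
    /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=<(gen_conf) \
        --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
        --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev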
00:12:27.999 ************************************ 00:12:28.000 END TEST xnvme_fio_plugin 00:12:28.000 ************************************ 00:12:28.000 02:07:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:28.000 02:07:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:28.000 02:07:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:28.000 02:07:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:12:28.000 02:07:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:12:28.000 02:07:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:28.000 02:07:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:28.000 02:07:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:28.000 02:07:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:28.000 ************************************ 00:12:28.000 START TEST xnvme_rpc 00:12:28.000 ************************************ 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71204 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71204 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71204 ']' 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:28.000 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:28.000 02:07:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:28.261 [2024-12-15 02:07:52.800375] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
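[Annotation] The target coming up here repeats the create/inspect/delete cycle of the first xnvme_rpc run, this time requesting conserve_cpu via the -c flag stored in cc["true"]. Stripped of the harness plumbing, the cycle is three RPCs (a sketch using rpc.py directly; the trace issues the same calls through rpc_cmd over /var/tmp/spdk.sock):

    rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
    $rpc bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c    # -c requests conserve_cpu
    $rpc framework_get_config bdev |
        jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
    $rpc bdev_xnvme_delete xnvme_bdev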
00:12:28.261 [2024-12-15 02:07:52.800517] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71204 ] 00:12:28.261 [2024-12-15 02:07:52.966086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:28.522 [2024-12-15 02:07:53.088306] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev libaio -c 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.094 xnvme_bdev 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.094 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ libaio == \l\i\b\a\i\o ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq 
-r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71204 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71204 ']' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71204 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71204 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:29.356 killing process with pid 71204 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71204' 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71204 00:12:29.356 02:07:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71204 00:12:31.277 ************************************ 00:12:31.277 END TEST xnvme_rpc 00:12:31.277 ************************************ 00:12:31.277 00:12:31.277 real 0m2.919s 00:12:31.277 user 0m2.873s 00:12:31.277 sys 0m0.483s 00:12:31.277 02:07:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:31.277 02:07:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 02:07:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:12:31.277 02:07:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:31.277 02:07:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:31.277 02:07:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 ************************************ 00:12:31.277 START TEST xnvme_bdevperf 00:12:31.277 ************************************ 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=libaio 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 
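[Annotation] gen_conf, which the trace is expanding at this point, is what turns the method_bdev_xnvme_create_0 associative array into the JSON blocks printed throughout this log. A minimal reimplementation with jq (a sketch; the real helper in dd/common.sh is generic over every method_* array in scope):

    gen_conf() {
        jq -n \
           --arg io     "${method_bdev_xnvme_create_0[io_mechanism]}" \
           --arg file   "${method_bdev_xnvme_create_0[filename]}" \
           --argjson cc "${method_bdev_xnvme_create_0[conserve_cpu]}" \
           '{subsystems: [{subsystem: "bdev", config: [
               {params: {io_mechanism: $io, conserve_cpu: $cc,
                         filename: $file, name: "xnvme_bdev"},
                method: "bdev_xnvme_create"},
               {method: "bdev_wait_for_examine"}]}]}'
    }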
00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:31.277 02:07:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:31.277 { 00:12:31.277 "subsystems": [ 00:12:31.277 { 00:12:31.277 "subsystem": "bdev", 00:12:31.277 "config": [ 00:12:31.277 { 00:12:31.277 "params": { 00:12:31.277 "io_mechanism": "libaio", 00:12:31.277 "conserve_cpu": true, 00:12:31.277 "filename": "/dev/nvme0n1", 00:12:31.277 "name": "xnvme_bdev" 00:12:31.277 }, 00:12:31.277 "method": "bdev_xnvme_create" 00:12:31.277 }, 00:12:31.277 { 00:12:31.277 "method": "bdev_wait_for_examine" 00:12:31.277 } 00:12:31.277 ] 00:12:31.277 } 00:12:31.277 ] 00:12:31.277 } 00:12:31.277 [2024-12-15 02:07:55.779189] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:12:31.277 [2024-12-15 02:07:55.779341] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71273 ] 00:12:31.277 [2024-12-15 02:07:55.943340] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:31.538 [2024-12-15 02:07:56.073629] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:31.798 Running I/O for 5 seconds... 00:12:33.756 32630.00 IOPS, 127.46 MiB/s [2024-12-15T02:07:59.465Z] 33485.50 IOPS, 130.80 MiB/s [2024-12-15T02:08:00.405Z] 32205.67 IOPS, 125.80 MiB/s [2024-12-15T02:08:01.793Z] 32469.25 IOPS, 126.83 MiB/s 00:12:37.028 Latency(us) 00:12:37.028 [2024-12-15T02:08:01.793Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:37.028 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:12:37.028 xnvme_bdev : 5.00 32887.38 128.47 0.00 0.00 1941.50 214.25 10233.70 00:12:37.028 [2024-12-15T02:08:01.793Z] =================================================================================================================== 00:12:37.028 [2024-12-15T02:08:01.793Z] Total : 32887.38 128.47 0.00 0.00 1941.50 214.25 10233.70 00:12:37.600 02:08:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:37.600 02:08:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:12:37.600 02:08:02 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:12:37.600 02:08:02 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:12:37.600 02:08:02 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:37.600 { 00:12:37.600 "subsystems": [ 00:12:37.600 { 00:12:37.600 "subsystem": "bdev", 00:12:37.600 "config": [ 00:12:37.600 { 00:12:37.600 "params": { 00:12:37.600 "io_mechanism": "libaio", 00:12:37.600 "conserve_cpu": true, 00:12:37.600 "filename": "/dev/nvme0n1", 00:12:37.600 "name": "xnvme_bdev" 00:12:37.600 }, 00:12:37.600 "method": "bdev_xnvme_create" 00:12:37.600 }, 00:12:37.600 { 00:12:37.600 "method": "bdev_wait_for_examine" 00:12:37.600 } 00:12:37.600 ] 00:12:37.600 } 00:12:37.600 ] 00:12:37.600 } 00:12:37.600 [2024-12-15 02:08:02.261353] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
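[Annotation] Every START/END banner pair in this log comes from the same wrapper. Reconstructed from its visible output, the asterisk banners plus the real/user/sys triple after each test, run_test amounts to (a sketch; argument validation and xtrace toggling omitted):

    run_test() {
        local name=$1; shift
        echo "************************************"
        echo "START TEST $name"
        echo "************************************"
        time "$@"     # produces the real/user/sys lines seen after each test
        echo "************************************"
        echo "END TEST $name"
        echo "************************************"
    }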
00:12:37.600 [2024-12-15 02:08:02.261699] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71349 ] 00:12:37.860 [2024-12-15 02:08:02.426879] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:37.860 [2024-12-15 02:08:02.546756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:38.122 Running I/O for 5 seconds... 00:12:40.453 35469.00 IOPS, 138.55 MiB/s [2024-12-15T02:08:06.160Z] 36695.00 IOPS, 143.34 MiB/s [2024-12-15T02:08:07.101Z] 35915.00 IOPS, 140.29 MiB/s [2024-12-15T02:08:08.045Z] 35645.00 IOPS, 139.24 MiB/s [2024-12-15T02:08:08.045Z] 36058.20 IOPS, 140.85 MiB/s 00:12:43.280 Latency(us) 00:12:43.280 [2024-12-15T02:08:08.045Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:12:43.280 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:12:43.280 xnvme_bdev : 5.00 36026.71 140.73 0.00 0.00 1771.03 174.87 6755.25 00:12:43.280 [2024-12-15T02:08:08.045Z] =================================================================================================================== 00:12:43.280 [2024-12-15T02:08:08.045Z] Total : 36026.71 140.73 0.00 0.00 1771.03 174.87 6755.25 00:12:44.223 00:12:44.223 real 0m12.967s 00:12:44.223 user 0m5.085s 00:12:44.223 sys 0m6.126s 00:12:44.223 02:08:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:44.223 02:08:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:12:44.224 ************************************ 00:12:44.224 END TEST xnvme_bdevperf 00:12:44.224 ************************************ 00:12:44.224 02:08:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:12:44.224 02:08:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:44.224 02:08:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:44.224 02:08:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:44.224 ************************************ 00:12:44.224 START TEST xnvme_fio_plugin 00:12:44.224 ************************************ 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=libaio_fio 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:44.224 02:08:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:44.224 { 00:12:44.224 "subsystems": [ 00:12:44.224 { 00:12:44.224 "subsystem": "bdev", 00:12:44.224 "config": [ 00:12:44.224 { 00:12:44.224 "params": { 00:12:44.224 "io_mechanism": "libaio", 00:12:44.224 "conserve_cpu": true, 00:12:44.224 "filename": "/dev/nvme0n1", 00:12:44.224 "name": "xnvme_bdev" 00:12:44.224 }, 00:12:44.224 "method": "bdev_xnvme_create" 00:12:44.224 }, 00:12:44.224 { 00:12:44.224 "method": "bdev_wait_for_examine" 00:12:44.224 } 00:12:44.224 ] 00:12:44.224 } 00:12:44.224 ] 00:12:44.224 } 00:12:44.224 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:44.224 fio-3.35 00:12:44.224 Starting 1 thread 00:12:50.809 00:12:50.810 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71468: Sun Dec 15 02:08:14 2024 00:12:50.810 read: IOPS=33.5k, BW=131MiB/s (137MB/s)(655MiB/5001msec) 00:12:50.810 slat (usec): min=4, max=1994, avg=21.09, stdev=93.69 00:12:50.810 clat (usec): min=106, max=4696, avg=1338.68, stdev=524.41 00:12:50.810 lat (usec): min=213, max=4773, avg=1359.77, stdev=516.24 00:12:50.810 clat percentiles (usec): 00:12:50.810 | 1.00th=[ 293], 5.00th=[ 553], 10.00th=[ 709], 20.00th=[ 906], 00:12:50.810 | 30.00th=[ 1057], 40.00th=[ 1188], 50.00th=[ 1303], 60.00th=[ 1434], 00:12:50.810 | 70.00th=[ 1565], 80.00th=[ 1713], 90.00th=[ 1975], 95.00th=[ 2245], 00:12:50.810 | 99.00th=[ 2933], 99.50th=[ 3228], 99.90th=[ 3687], 99.95th=[ 3982], 00:12:50.810 | 99.99th=[ 4228] 00:12:50.810 bw ( KiB/s): 
min=118336, max=144992, per=100.00%, avg=134229.33, stdev=7468.36, samples=9 00:12:50.810 iops : min=29584, max=36248, avg=33557.33, stdev=1867.09, samples=9 00:12:50.810 lat (usec) : 250=0.59%, 500=3.26%, 750=7.90%, 1000=14.04% 00:12:50.810 lat (msec) : 2=64.83%, 4=9.34%, 10=0.05% 00:12:50.810 cpu : usr=43.30%, sys=48.16%, ctx=12, majf=0, minf=764 00:12:50.810 IO depths : 1=0.6%, 2=1.3%, 4=3.2%, 8=8.6%, 16=23.0%, 32=61.2%, >=64=2.1% 00:12:50.810 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:50.810 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.3%, 64=1.6%, >=64=0.0% 00:12:50.810 issued rwts: total=167749,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:50.810 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:50.810 00:12:50.810 Run status group 0 (all jobs): 00:12:50.810 READ: bw=131MiB/s (137MB/s), 131MiB/s-131MiB/s (137MB/s-137MB/s), io=655MiB (687MB), run=5001-5001msec 00:12:51.070 ----------------------------------------------------- 00:12:51.070 Suppressions used: 00:12:51.070 count bytes template 00:12:51.070 1 11 /usr/src/fio/parse.c 00:12:51.070 1 8 libtcmalloc_minimal.so 00:12:51.070 1 904 libcrypto.so 00:12:51.070 ----------------------------------------------------- 00:12:51.070 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:12:51.070 
02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:12:51.070 02:08:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:12:51.070 { 00:12:51.070 "subsystems": [ 00:12:51.070 { 00:12:51.070 "subsystem": "bdev", 00:12:51.070 "config": [ 00:12:51.070 { 00:12:51.070 "params": { 00:12:51.070 "io_mechanism": "libaio", 00:12:51.070 "conserve_cpu": true, 00:12:51.070 "filename": "/dev/nvme0n1", 00:12:51.070 "name": "xnvme_bdev" 00:12:51.070 }, 00:12:51.070 "method": "bdev_xnvme_create" 00:12:51.071 }, 00:12:51.071 { 00:12:51.071 "method": "bdev_wait_for_examine" 00:12:51.071 } 00:12:51.071 ] 00:12:51.071 } 00:12:51.071 ] 00:12:51.071 } 00:12:51.332 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:12:51.332 fio-3.35 00:12:51.332 Starting 1 thread 00:12:57.921 00:12:57.921 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71560: Sun Dec 15 02:08:21 2024 00:12:57.921 write: IOPS=34.4k, BW=134MiB/s (141MB/s)(672MiB/5001msec); 0 zone resets 00:12:57.921 slat (usec): min=4, max=2185, avg=22.25, stdev=87.21 00:12:57.921 clat (usec): min=59, max=9586, avg=1253.05, stdev=554.97 00:12:57.921 lat (usec): min=174, max=9590, avg=1275.30, stdev=548.97 00:12:57.921 clat percentiles (usec): 00:12:57.921 | 1.00th=[ 265], 5.00th=[ 433], 10.00th=[ 586], 20.00th=[ 791], 00:12:57.921 | 30.00th=[ 947], 40.00th=[ 1090], 50.00th=[ 1205], 60.00th=[ 1336], 00:12:57.921 | 70.00th=[ 1483], 80.00th=[ 1647], 90.00th=[ 1942], 95.00th=[ 2212], 00:12:57.921 | 99.00th=[ 2900], 99.50th=[ 3130], 99.90th=[ 3851], 99.95th=[ 4113], 00:12:57.921 | 99.99th=[ 7635] 00:12:57.921 bw ( KiB/s): min=116272, max=154712, per=99.69%, avg=137125.00, stdev=13429.96, samples=9 00:12:57.921 iops : min=29068, max=38678, avg=34281.22, stdev=3357.49, samples=9 00:12:57.921 lat (usec) : 100=0.01%, 250=0.83%, 500=5.97%, 750=10.88%, 1000=15.87% 00:12:57.921 lat (msec) : 2=57.93%, 4=8.46%, 10=0.06% 00:12:57.921 cpu : usr=37.24%, sys=53.18%, ctx=11, majf=0, minf=765 00:12:57.921 IO depths : 1=0.4%, 2=1.1%, 4=3.1%, 8=8.8%, 16=23.8%, 32=60.8%, >=64=2.1% 00:12:57.921 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:12:57.921 complete : 0=0.0%, 4=98.0%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.7%, >=64=0.0% 00:12:57.921 issued rwts: total=0,171978,0,0 short=0,0,0,0 dropped=0,0,0,0 00:12:57.921 latency : target=0, window=0, percentile=100.00%, depth=64 00:12:57.921 00:12:57.921 Run status group 0 (all jobs): 00:12:57.921 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=672MiB (704MB), run=5001-5001msec 00:12:57.921 ----------------------------------------------------- 00:12:57.921 Suppressions used: 00:12:57.921 count bytes template 00:12:57.921 1 11 /usr/src/fio/parse.c 00:12:57.921 1 8 libtcmalloc_minimal.so 00:12:57.921 1 904 libcrypto.so 00:12:57.921 ----------------------------------------------------- 00:12:57.921 00:12:57.921 ************************************ 00:12:57.921 END TEST xnvme_fio_plugin 00:12:57.921 
************************************ 00:12:57.921 00:12:57.921 real 0m13.802s 00:12:57.921 user 0m6.818s 00:12:57.921 sys 0m5.689s 00:12:57.921 02:08:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:12:57.921 02:08:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/nvme0n1 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/nvme0n1 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:12:57.921 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:12:57.922 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:12:57.922 02:08:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:12:57.922 02:08:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:12:57.922 02:08:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:12:57.922 02:08:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:12:57.922 ************************************ 00:12:57.922 START TEST xnvme_rpc 00:12:57.922 ************************************ 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=71646 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 71646 00:12:57.922 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 71646 ']' 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:12:57.922 02:08:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:58.183 [2024-12-15 02:08:22.692472] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
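[Annotation] With libaio finished, the trace is now on the second lap of the outer loop: io_uring, conserve_cpu=false first. Reconstructed from the xnvme.sh line numbers that keep appearing (@75 through @88), the driver is simply (a sketch; per-iteration locals like filename and name are omitted):

    for io in "${xnvme_io[@]}"; do                     # libaio, io_uring, io_uring_cmd
        method_bdev_xnvme_create_0["io_mechanism"]=$io
        method_bdev_xnvme_create_0["filename"]=${xnvme_filename[$io]}
        for cc in "${xnvme_conserve_cpu[@]}"; do       # false, then true
            method_bdev_xnvme_create_0["conserve_cpu"]=$cc
            run_test xnvme_rpc xnvme_rpc
            run_test xnvme_bdevperf xnvme_bdevperf
            run_test xnvme_fio_plugin xnvme_fio_plugin
        done
    done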
00:12:58.183 [2024-12-15 02:08:22.692619] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71646 ] 00:12:58.183 [2024-12-15 02:08:22.857278] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:12:58.444 [2024-12-15 02:08:22.977151] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring '' 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.017 xnvme_bdev 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.017 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 71646 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 71646 ']' 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 71646 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 71646 00:12:59.279 killing process with pid 71646 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 71646' 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 71646 00:12:59.279 02:08:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 71646 00:13:01.201 ************************************ 00:13:01.201 END TEST xnvme_rpc 00:13:01.201 ************************************ 00:13:01.201 00:13:01.201 real 0m2.891s 00:13:01.201 user 0m2.919s 00:13:01.201 sys 0m0.469s 00:13:01.201 02:08:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:01.201 02:08:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:01.201 02:08:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:01.201 02:08:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:01.201 02:08:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:01.201 02:08:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:01.201 ************************************ 00:13:01.201 START TEST xnvme_bdevperf 00:13:01.201 ************************************ 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:01.201 02:08:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:01.201 { 00:13:01.201 "subsystems": [ 00:13:01.201 { 00:13:01.201 "subsystem": "bdev", 00:13:01.201 "config": [ 00:13:01.201 { 00:13:01.201 "params": { 00:13:01.201 "io_mechanism": "io_uring", 00:13:01.201 "conserve_cpu": false, 00:13:01.201 "filename": "/dev/nvme0n1", 00:13:01.201 "name": "xnvme_bdev" 00:13:01.201 }, 00:13:01.201 "method": "bdev_xnvme_create" 00:13:01.201 }, 00:13:01.201 { 00:13:01.201 "method": "bdev_wait_for_examine" 00:13:01.201 } 00:13:01.201 ] 00:13:01.201 } 00:13:01.201 ] 00:13:01.201 } 00:13:01.201 [2024-12-15 02:08:25.634316] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:01.201 [2024-12-15 02:08:25.634616] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71716 ] 00:13:01.201 [2024-12-15 02:08:25.796369] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:01.201 [2024-12-15 02:08:25.912572] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:01.462 Running I/O for 5 seconds... 00:13:03.848 33183.00 IOPS, 129.62 MiB/s [2024-12-15T02:08:29.565Z] 33071.50 IOPS, 129.19 MiB/s [2024-12-15T02:08:30.510Z] 34051.00 IOPS, 133.01 MiB/s [2024-12-15T02:08:31.454Z] 33883.75 IOPS, 132.36 MiB/s [2024-12-15T02:08:31.454Z] 33740.40 IOPS, 131.80 MiB/s 00:13:06.689 Latency(us) 00:13:06.689 [2024-12-15T02:08:31.454Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:06.689 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:06.689 xnvme_bdev : 5.01 33720.96 131.72 0.00 0.00 1893.97 349.74 11846.89 00:13:06.689 [2024-12-15T02:08:31.454Z] =================================================================================================================== 00:13:06.689 [2024-12-15T02:08:31.454Z] Total : 33720.96 131.72 0.00 0.00 1893.97 349.74 11846.89 00:13:07.261 02:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:07.261 02:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:07.261 02:08:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:07.261 02:08:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:07.261 02:08:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:07.521 { 00:13:07.521 "subsystems": [ 00:13:07.521 { 00:13:07.521 "subsystem": "bdev", 00:13:07.521 "config": [ 00:13:07.521 { 00:13:07.521 "params": { 00:13:07.521 "io_mechanism": "io_uring", 00:13:07.521 "conserve_cpu": false, 00:13:07.521 "filename": "/dev/nvme0n1", 00:13:07.521 "name": "xnvme_bdev" 00:13:07.521 }, 00:13:07.521 "method": "bdev_xnvme_create" 00:13:07.521 }, 00:13:07.521 { 00:13:07.521 "method": "bdev_wait_for_examine" 00:13:07.521 } 00:13:07.521 ] 00:13:07.521 } 00:13:07.521 ] 00:13:07.521 } 00:13:07.521 [2024-12-15 02:08:32.068267] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
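Both bdevperf passes take their bdev table on --json /dev/fd/62: gen_conf emits the JSON shown above and the harness hands it over a process-substitution descriptor rather than a temp file. A standalone equivalent of the randread invocation, a sketch with the flags and JSON taken verbatim from the trace (the fd number the shell assigns may differ from 62):

    ./build/examples/bdevperf -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 \
        --json <(echo '{"subsystems": [{"subsystem": "bdev", "config": [
            {"params": {"io_mechanism": "io_uring", "conserve_cpu": false,
                        "filename": "/dev/nvme0n1", "name": "xnvme_bdev"},
             "method": "bdev_xnvme_create"},
            {"method": "bdev_wait_for_examine"}]}]}')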
00:13:07.521 [2024-12-15 02:08:32.068566] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid71791 ] 00:13:07.521 [2024-12-15 02:08:32.231712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:07.782 [2024-12-15 02:08:32.359076] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:08.042 Running I/O for 5 seconds... 00:13:09.925 34657.00 IOPS, 135.38 MiB/s [2024-12-15T02:08:36.075Z] 34364.50 IOPS, 134.24 MiB/s [2024-12-15T02:08:37.019Z] 34346.00 IOPS, 134.16 MiB/s [2024-12-15T02:08:37.962Z] 34326.75 IOPS, 134.09 MiB/s [2024-12-15T02:08:37.962Z] 34311.40 IOPS, 134.03 MiB/s 00:13:13.197 Latency(us) 00:13:13.197 [2024-12-15T02:08:37.963Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:13.198 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:13.198 xnvme_bdev : 5.00 34296.02 133.97 0.00 0.00 1861.92 340.28 10082.46 00:13:13.198 [2024-12-15T02:08:37.963Z] =================================================================================================================== 00:13:13.198 [2024-12-15T02:08:37.963Z] Total : 34296.02 133.97 0.00 0.00 1861.92 340.28 10082.46 00:13:13.768 ************************************ 00:13:13.768 END TEST xnvme_bdevperf 00:13:13.768 ************************************ 00:13:13.768 00:13:13.768 real 0m12.874s 00:13:13.768 user 0m6.064s 00:13:13.768 sys 0m6.529s 00:13:13.768 02:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:13.768 02:08:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:13.768 02:08:38 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:13.768 02:08:38 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:13.768 02:08:38 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:13.768 02:08:38 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:13.768 ************************************ 00:13:13.768 START TEST xnvme_fio_plugin 00:13:13.768 ************************************ 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # 
sanitizers=('libasan' 'libclang_rt.asan') 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:13.768 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:13.769 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:14.031 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:14.031 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:14.031 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:14.031 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:14.031 02:08:38 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:14.031 { 00:13:14.031 "subsystems": [ 00:13:14.031 { 00:13:14.031 "subsystem": "bdev", 00:13:14.031 "config": [ 00:13:14.031 { 00:13:14.031 "params": { 00:13:14.031 "io_mechanism": "io_uring", 00:13:14.031 "conserve_cpu": false, 00:13:14.031 "filename": "/dev/nvme0n1", 00:13:14.031 "name": "xnvme_bdev" 00:13:14.031 }, 00:13:14.031 "method": "bdev_xnvme_create" 00:13:14.031 }, 00:13:14.031 { 00:13:14.031 "method": "bdev_wait_for_examine" 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 } 00:13:14.031 ] 00:13:14.031 } 00:13:14.031 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:14.031 fio-3.35 00:13:14.031 Starting 1 thread 00:13:20.618 00:13:20.618 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=71910: Sun Dec 15 02:08:44 2024 00:13:20.618 read: IOPS=32.7k, BW=128MiB/s (134MB/s)(640MiB/5002msec) 00:13:20.618 slat (usec): min=2, max=214, avg= 3.65, stdev= 2.54 00:13:20.618 clat (usec): min=1007, max=8719, avg=1806.91, stdev=316.89 00:13:20.618 lat (usec): min=1010, max=8732, avg=1810.56, stdev=317.31 00:13:20.618 clat percentiles (usec): 00:13:20.618 | 1.00th=[ 1287], 5.00th=[ 1401], 10.00th=[ 1483], 20.00th=[ 1565], 00:13:20.618 | 30.00th=[ 1631], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1844], 00:13:20.618 | 70.00th=[ 1909], 80.00th=[ 2024], 90.00th=[ 2180], 95.00th=[ 2311], 00:13:20.618 | 99.00th=[ 2638], 99.50th=[ 2802], 99.90th=[ 3228], 99.95th=[ 4146], 00:13:20.618 | 99.99th=[ 8717] 00:13:20.618 bw ( 
KiB/s): min=124416, max=136704, per=100.00%, avg=131072.00, stdev=3519.42, samples=9 00:13:20.618 iops : min=31104, max=34176, avg=32768.00, stdev=879.85, samples=9 00:13:20.618 lat (msec) : 2=78.37%, 4=21.56%, 10=0.07% 00:13:20.618 cpu : usr=30.77%, sys=67.55%, ctx=46, majf=0, minf=762 00:13:20.618 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:20.618 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:20.618 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 64=1.5%, >=64=0.0% 00:13:20.618 issued rwts: total=163712,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:20.618 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:20.618 00:13:20.618 Run status group 0 (all jobs): 00:13:20.618 READ: bw=128MiB/s (134MB/s), 128MiB/s-128MiB/s (134MB/s-134MB/s), io=640MiB (671MB), run=5002-5002msec 00:13:20.618 ----------------------------------------------------- 00:13:20.618 Suppressions used: 00:13:20.618 count bytes template 00:13:20.618 1 11 /usr/src/fio/parse.c 00:13:20.618 1 8 libtcmalloc_minimal.so 00:13:20.618 1 904 libcrypto.so 00:13:20.618 ----------------------------------------------------- 00:13:20.618 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:20.879 02:08:45 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:20.879 { 00:13:20.879 "subsystems": [ 00:13:20.879 { 00:13:20.879 "subsystem": "bdev", 00:13:20.879 "config": [ 00:13:20.879 { 00:13:20.879 "params": { 00:13:20.879 "io_mechanism": "io_uring", 00:13:20.879 "conserve_cpu": false, 00:13:20.879 "filename": "/dev/nvme0n1", 00:13:20.879 "name": "xnvme_bdev" 00:13:20.879 }, 00:13:20.879 "method": "bdev_xnvme_create" 00:13:20.879 }, 00:13:20.879 { 00:13:20.879 "method": "bdev_wait_for_examine" 00:13:20.879 } 00:13:20.879 ] 00:13:20.879 } 00:13:20.879 ] 00:13:20.879 } 00:13:20.879 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:20.879 fio-3.35 00:13:20.879 Starting 1 thread 00:13:27.469 00:13:27.469 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72002: Sun Dec 15 02:08:51 2024 00:13:27.469 write: IOPS=34.3k, BW=134MiB/s (141MB/s)(670MiB/5001msec); 0 zone resets 00:13:27.469 slat (usec): min=2, max=114, avg= 3.70, stdev= 1.92 00:13:27.469 clat (usec): min=416, max=4155, avg=1716.75, stdev=299.87 00:13:27.469 lat (usec): min=419, max=4159, avg=1720.45, stdev=300.20 00:13:27.469 clat percentiles (usec): 00:13:27.469 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1369], 20.00th=[ 1467], 00:13:27.469 | 30.00th=[ 1549], 40.00th=[ 1614], 50.00th=[ 1680], 60.00th=[ 1762], 00:13:27.469 | 70.00th=[ 1844], 80.00th=[ 1942], 90.00th=[ 2114], 95.00th=[ 2245], 00:13:27.469 | 99.00th=[ 2573], 99.50th=[ 2737], 99.90th=[ 3359], 99.95th=[ 3589], 00:13:27.469 | 99.99th=[ 3851] 00:13:27.469 bw ( KiB/s): min=133416, max=150016, per=100.00%, avg=138041.67, stdev=5205.67, samples=9 00:13:27.469 iops : min=33354, max=37504, avg=34510.33, stdev=1301.45, samples=9 00:13:27.469 lat (usec) : 500=0.01%, 750=0.01%, 1000=0.01% 00:13:27.469 lat (msec) : 2=84.78%, 4=15.20%, 10=0.01% 00:13:27.469 cpu : usr=32.04%, sys=66.70%, ctx=15, majf=0, minf=763 00:13:27.469 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:27.469 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:27.469 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:27.469 issued rwts: total=0,171612,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:27.469 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:27.469 00:13:27.469 Run status group 0 (all jobs): 00:13:27.469 WRITE: bw=134MiB/s (141MB/s), 134MiB/s-134MiB/s (141MB/s-141MB/s), io=670MiB (703MB), run=5001-5001msec 00:13:27.730 ----------------------------------------------------- 00:13:27.730 Suppressions used: 00:13:27.730 count bytes template 00:13:27.730 1 11 /usr/src/fio/parse.c 00:13:27.730 1 8 libtcmalloc_minimal.so 00:13:27.730 1 904 libcrypto.so 00:13:27.730 ----------------------------------------------------- 00:13:27.730 00:13:27.730 ************************************ 00:13:27.730 END TEST xnvme_fio_plugin 00:13:27.730 ************************************ 00:13:27.730 
00:13:27.730 real 0m13.773s 00:13:27.730 user 0m6.061s 00:13:27.730 sys 0m7.237s 00:13:27.730 02:08:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:27.730 02:08:52 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:27.730 02:08:52 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:27.730 02:08:52 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:13:27.730 02:08:52 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:13:27.730 02:08:52 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:27.730 02:08:52 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:27.730 02:08:52 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:27.730 02:08:52 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:27.730 ************************************ 00:13:27.730 START TEST xnvme_rpc 00:13:27.730 ************************************ 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:27.730 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72088 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72088 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72088 ']' 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:27.730 02:08:52 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:27.730 [2024-12-15 02:08:52.430988] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
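The conserve_cpu loop now flips to true and the xnvme_rpc lifecycle repeats; per the cc["true"]=-c mapping set up at the top of the test, the only change is an extra -c on the create call, as the trace below shows. The equivalent stock-rpc.py lines, a sketch in which only the flag differs from the first pass:

    ./scripts/rpc.py bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c
    ./scripts/rpc.py framework_get_config bdev \
        | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'
    # prints: true  (matching the [[ true == \t\r\u\e ]] check in the trace)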
00:13:27.730 [2024-12-15 02:08:52.431407] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72088 ] 00:13:27.991 [2024-12-15 02:08:52.600690] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:27.991 [2024-12-15 02:08:52.719319] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/nvme0n1 xnvme_bdev io_uring -c 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 xnvme_bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/nvme0n1 == \/\d\e\v\/\n\v\m\e\0\n\1 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring == \i\o\_\u\r\i\n\g ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72088 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72088 ']' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72088 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72088 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:28.932 killing process with pid 72088 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72088' 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72088 00:13:28.932 02:08:53 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72088 00:13:30.846 ************************************ 00:13:30.846 END TEST xnvme_rpc 00:13:30.846 ************************************ 00:13:30.846 00:13:30.846 real 0m2.902s 00:13:30.846 user 0m2.908s 00:13:30.846 sys 0m0.477s 00:13:30.846 02:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:30.846 02:08:55 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:30.846 02:08:55 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:13:30.846 02:08:55 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:30.846 02:08:55 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:30.846 02:08:55 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:30.846 ************************************ 00:13:30.846 START TEST xnvme_bdevperf 00:13:30.846 ************************************ 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:30.846 02:08:55 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:30.846 { 00:13:30.846 "subsystems": [ 00:13:30.846 { 00:13:30.846 "subsystem": "bdev", 00:13:30.846 "config": [ 00:13:30.846 { 00:13:30.846 "params": { 00:13:30.846 "io_mechanism": "io_uring", 00:13:30.846 "conserve_cpu": true, 00:13:30.846 "filename": "/dev/nvme0n1", 00:13:30.846 "name": "xnvme_bdev" 00:13:30.846 }, 00:13:30.846 "method": "bdev_xnvme_create" 00:13:30.846 }, 00:13:30.846 { 00:13:30.846 "method": "bdev_wait_for_examine" 00:13:30.846 } 00:13:30.846 ] 00:13:30.846 } 00:13:30.846 ] 00:13:30.846 } 00:13:30.846 [2024-12-15 02:08:55.395737] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:13:30.846 [2024-12-15 02:08:55.396047] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72157 ] 00:13:30.846 [2024-12-15 02:08:55.560591] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:31.106 [2024-12-15 02:08:55.679655] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:31.367 Running I/O for 5 seconds... 00:13:33.302 34234.00 IOPS, 133.73 MiB/s [2024-12-15T02:08:59.011Z] 33903.50 IOPS, 132.44 MiB/s [2024-12-15T02:09:00.398Z] 33563.33 IOPS, 131.11 MiB/s [2024-12-15T02:09:01.343Z] 33506.75 IOPS, 130.89 MiB/s [2024-12-15T02:09:01.343Z] 33584.40 IOPS, 131.19 MiB/s 00:13:36.578 Latency(us) 00:13:36.578 [2024-12-15T02:09:01.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:36.578 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:13:36.578 xnvme_bdev : 5.01 33549.20 131.05 0.00 0.00 1903.76 907.42 7360.20 00:13:36.578 [2024-12-15T02:09:01.343Z] =================================================================================================================== 00:13:36.578 [2024-12-15T02:09:01.343Z] Total : 33549.20 131.05 0.00 0.00 1903.76 907.42 7360.20 00:13:37.150 02:09:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:37.150 02:09:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:13:37.150 02:09:01 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:13:37.150 02:09:01 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:13:37.150 02:09:01 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:37.150 { 00:13:37.150 "subsystems": [ 00:13:37.150 { 00:13:37.150 "subsystem": "bdev", 00:13:37.150 "config": [ 00:13:37.150 { 00:13:37.150 "params": { 00:13:37.150 "io_mechanism": "io_uring", 00:13:37.150 "conserve_cpu": true, 00:13:37.150 "filename": "/dev/nvme0n1", 00:13:37.150 "name": "xnvme_bdev" 00:13:37.150 }, 00:13:37.150 "method": "bdev_xnvme_create" 00:13:37.150 }, 00:13:37.150 { 00:13:37.150 "method": "bdev_wait_for_examine" 00:13:37.150 } 00:13:37.150 ] 00:13:37.150 } 00:13:37.150 ] 00:13:37.150 } 00:13:37.150 [2024-12-15 02:09:01.867052] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
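The generated JSON above is identical to the first bdevperf pass except for "conserve_cpu": true. If a run's config were saved to a file (hypothetical here; the harness streams it over a descriptor), the active mode could be confirmed with:

    # bdev.json is an assumed saved copy of the streamed configuration
    jq -r '.subsystems[] | select(.subsystem == "bdev").config[]
           | select(.method == "bdev_xnvme_create").params.conserve_cpu' bdev.json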
00:13:37.150 [2024-12-15 02:09:01.867213] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72232 ] 00:13:37.412 [2024-12-15 02:09:02.031843] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:37.412 [2024-12-15 02:09:02.160297] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:37.985 Running I/O for 5 seconds... 00:13:39.868 33973.00 IOPS, 132.71 MiB/s [2024-12-15T02:09:05.573Z] 34150.50 IOPS, 133.40 MiB/s [2024-12-15T02:09:06.516Z] 34183.67 IOPS, 133.53 MiB/s [2024-12-15T02:09:07.902Z] 34134.75 IOPS, 133.34 MiB/s 00:13:43.137 Latency(us) 00:13:43.137 [2024-12-15T02:09:07.902Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:13:43.137 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:13:43.137 xnvme_bdev : 5.00 34314.41 134.04 0.00 0.00 1860.84 329.26 9578.34 00:13:43.137 [2024-12-15T02:09:07.902Z] =================================================================================================================== 00:13:43.137 [2024-12-15T02:09:07.902Z] Total : 34314.41 134.04 0.00 0.00 1860.84 329.26 9578.34 00:13:43.710 00:13:43.710 real 0m13.011s 00:13:43.710 user 0m8.522s 00:13:43.710 sys 0m3.913s 00:13:43.710 ************************************ 00:13:43.710 02:09:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:43.710 02:09:08 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:13:43.710 END TEST xnvme_bdevperf 00:13:43.710 ************************************ 00:13:43.710 02:09:08 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:13:43.710 02:09:08 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:43.710 02:09:08 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:43.710 02:09:08 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:43.710 ************************************ 00:13:43.710 START TEST xnvme_fio_plugin 00:13:43.710 ************************************ 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_fio 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:43.710 02:09:08 
nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:43.710 02:09:08 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:43.710 { 00:13:43.710 "subsystems": [ 00:13:43.710 { 00:13:43.710 "subsystem": "bdev", 00:13:43.710 "config": [ 00:13:43.710 { 00:13:43.710 "params": { 00:13:43.710 "io_mechanism": "io_uring", 00:13:43.710 "conserve_cpu": true, 00:13:43.710 "filename": "/dev/nvme0n1", 00:13:43.710 "name": "xnvme_bdev" 00:13:43.710 }, 00:13:43.710 "method": "bdev_xnvme_create" 00:13:43.710 }, 00:13:43.710 { 00:13:43.710 "method": "bdev_wait_for_examine" 00:13:43.710 } 00:13:43.710 ] 00:13:43.710 } 00:13:43.710 ] 00:13:43.710 } 00:13:43.971 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:43.971 fio-3.35 00:13:43.971 Starting 1 thread 00:13:50.558 00:13:50.558 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72351: Sun Dec 15 02:09:14 2024 00:13:50.558 read: IOPS=33.0k, BW=129MiB/s (135MB/s)(645MiB/5002msec) 00:13:50.558 slat (usec): min=2, max=233, avg= 3.55, stdev= 2.30 00:13:50.558 clat (usec): min=889, max=8612, avg=1792.10, stdev=272.71 00:13:50.558 lat (usec): min=892, max=8615, avg=1795.65, stdev=273.04 00:13:50.558 clat percentiles (usec): 00:13:50.558 | 1.00th=[ 1303], 5.00th=[ 1401], 10.00th=[ 1467], 20.00th=[ 1565], 00:13:50.558 | 30.00th=[ 1631], 40.00th=[ 1696], 50.00th=[ 1762], 60.00th=[ 1827], 00:13:50.558 | 70.00th=[ 1909], 80.00th=[ 2008], 90.00th=[ 2147], 95.00th=[ 2278], 00:13:50.558 | 99.00th=[ 2573], 99.50th=[ 2704], 99.90th=[ 3195], 99.95th=[ 3294], 00:13:50.558 | 99.99th=[ 3589] 00:13:50.558 bw ( KiB/s): min=130299, max=134144, per=99.90%, avg=132007.44, stdev=1532.54, 
samples=9 00:13:50.558 iops : min=32574, max=33536, avg=33001.78, stdev=383.24, samples=9 00:13:50.558 lat (usec) : 1000=0.02% 00:13:50.558 lat (msec) : 2=80.07%, 4=19.91%, 10=0.01% 00:13:50.558 cpu : usr=60.13%, sys=35.69%, ctx=41, majf=0, minf=762 00:13:50.558 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:13:50.558 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:50.558 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:50.558 issued rwts: total=165234,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:50.558 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:50.558 00:13:50.558 Run status group 0 (all jobs): 00:13:50.558 READ: bw=129MiB/s (135MB/s), 129MiB/s-129MiB/s (135MB/s-135MB/s), io=645MiB (677MB), run=5002-5002msec 00:13:50.558 ----------------------------------------------------- 00:13:50.558 Suppressions used: 00:13:50.558 count bytes template 00:13:50.558 1 11 /usr/src/fio/parse.c 00:13:50.558 1 8 libtcmalloc_minimal.so 00:13:50.558 1 904 libcrypto.so 00:13:50.558 ----------------------------------------------------- 00:13:50.558 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:13:50.558 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:13:50.819 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:13:50.819 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n 
/usr/lib64/libasan.so.8 ]] 00:13:50.819 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:13:50.819 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:13:50.819 02:09:15 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:13:50.819 { 00:13:50.819 "subsystems": [ 00:13:50.819 { 00:13:50.819 "subsystem": "bdev", 00:13:50.819 "config": [ 00:13:50.819 { 00:13:50.819 "params": { 00:13:50.819 "io_mechanism": "io_uring", 00:13:50.819 "conserve_cpu": true, 00:13:50.819 "filename": "/dev/nvme0n1", 00:13:50.819 "name": "xnvme_bdev" 00:13:50.819 }, 00:13:50.819 "method": "bdev_xnvme_create" 00:13:50.819 }, 00:13:50.819 { 00:13:50.819 "method": "bdev_wait_for_examine" 00:13:50.819 } 00:13:50.819 ] 00:13:50.819 } 00:13:50.819 ] 00:13:50.819 } 00:13:50.819 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:13:50.819 fio-3.35 00:13:50.819 Starting 1 thread 00:13:57.401 00:13:57.401 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72443: Sun Dec 15 02:09:21 2024 00:13:57.401 write: IOPS=35.3k, BW=138MiB/s (144MB/s)(689MiB/5002msec); 0 zone resets 00:13:57.401 slat (usec): min=2, max=109, avg= 3.60, stdev= 1.69 00:13:57.401 clat (usec): min=680, max=5000, avg=1670.58, stdev=263.13 00:13:57.401 lat (usec): min=684, max=5004, avg=1674.18, stdev=263.36 00:13:57.401 clat percentiles (usec): 00:13:57.401 | 1.00th=[ 1172], 5.00th=[ 1287], 10.00th=[ 1352], 20.00th=[ 1450], 00:13:57.401 | 30.00th=[ 1516], 40.00th=[ 1582], 50.00th=[ 1647], 60.00th=[ 1713], 00:13:57.401 | 70.00th=[ 1795], 80.00th=[ 1876], 90.00th=[ 2008], 95.00th=[ 2147], 00:13:57.401 | 99.00th=[ 2376], 99.50th=[ 2474], 99.90th=[ 2900], 99.95th=[ 3359], 00:13:57.401 | 99.99th=[ 3818] 00:13:57.401 bw ( KiB/s): min=132822, max=161272, per=100.00%, avg=141225.56, stdev=9909.84, samples=9 00:13:57.401 iops : min=33205, max=40318, avg=35306.33, stdev=2477.51, samples=9 00:13:57.401 lat (usec) : 750=0.01%, 1000=0.02% 00:13:57.401 lat (msec) : 2=89.82%, 4=10.15%, 10=0.01% 00:13:57.401 cpu : usr=65.11%, sys=31.27%, ctx=15, majf=0, minf=763 00:13:57.401 IO depths : 1=1.5%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.1%, >=64=1.6% 00:13:57.401 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:13:57.401 complete : 0=0.0%, 4=98.5%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:13:57.401 issued rwts: total=0,176349,0,0 short=0,0,0,0 dropped=0,0,0,0 00:13:57.401 latency : target=0, window=0, percentile=100.00%, depth=64 00:13:57.401 00:13:57.401 Run status group 0 (all jobs): 00:13:57.401 WRITE: bw=138MiB/s (144MB/s), 138MiB/s-138MiB/s (144MB/s-144MB/s), io=689MiB (722MB), run=5002-5002msec 00:13:57.662 ----------------------------------------------------- 00:13:57.662 Suppressions used: 00:13:57.662 count bytes template 00:13:57.662 1 11 /usr/src/fio/parse.c 00:13:57.662 1 8 libtcmalloc_minimal.so 00:13:57.662 1 904 libcrypto.so 00:13:57.662 ----------------------------------------------------- 00:13:57.662 00:13:57.662 00:13:57.662 real 0m13.796s 00:13:57.662 user 0m9.118s 00:13:57.662 sys 0m3.954s 00:13:57.662 ************************************ 00:13:57.662 END TEST xnvme_fio_plugin 00:13:57.662 
************************************ 00:13:57.662 02:09:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:13:57.662 02:09:22 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@75 -- # for io in "${xnvme_io[@]}" 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@76 -- # method_bdev_xnvme_create_0["io_mechanism"]=io_uring_cmd 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@77 -- # method_bdev_xnvme_create_0["filename"]=/dev/ng0n1 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@79 -- # filename=/dev/ng0n1 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@80 -- # name=xnvme_bdev 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=false 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@84 -- # conserve_cpu=false 00:13:57.662 02:09:22 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:13:57.662 02:09:22 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:13:57.662 02:09:22 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:13:57.662 02:09:22 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:13:57.662 ************************************ 00:13:57.662 START TEST xnvme_rpc 00:13:57.662 ************************************ 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=72529 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 72529 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 72529 ']' 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:13:57.662 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:13:57.662 02:09:22 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:57.662 [2024-12-15 02:09:22.359957] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
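From here the outer io-mechanism loop moves on to io_uring_cmd, and the filename switches from the block node /dev/nvme0n1 to the NVMe generic character device /dev/ng0n1, which is driven through io_uring passthru commands instead of the block layer. The create call for this pass in stock-rpc.py form, a sketch mirroring the rpc_cmd line in the trace:

    ./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd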
00:13:57.662 [2024-12-15 02:09:22.360098] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72529 ] 00:13:57.924 [2024-12-15 02:09:22.521130] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:13:57.924 [2024-12-15 02:09:22.640268] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd '' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 xnvme_bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- 
xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ false == \f\a\l\s\e ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 72529 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 72529 ']' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 72529 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 72529 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:13:58.867 killing process with pid 72529 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 72529' 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 72529 00:13:58.867 02:09:23 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 72529 00:14:00.783 ************************************ 00:14:00.784 END TEST xnvme_rpc 00:14:00.784 ************************************ 00:14:00.784 00:14:00.784 real 0m2.875s 00:14:00.784 user 0m2.895s 00:14:00.784 sys 0m0.472s 00:14:00.784 02:09:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:00.784 02:09:25 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 02:09:25 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:00.784 02:09:25 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:00.784 02:09:25 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:00.784 02:09:25 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 ************************************ 00:14:00.784 START TEST xnvme_bdevperf 00:14:00.784 ************************************ 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:00.784 02:09:25 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:00.784 { 00:14:00.784 "subsystems": [ 00:14:00.784 { 00:14:00.784 "subsystem": "bdev", 00:14:00.784 "config": [ 00:14:00.784 { 00:14:00.784 "params": { 00:14:00.784 "io_mechanism": "io_uring_cmd", 00:14:00.784 "conserve_cpu": false, 00:14:00.784 "filename": "/dev/ng0n1", 00:14:00.784 "name": "xnvme_bdev" 00:14:00.784 }, 00:14:00.784 "method": "bdev_xnvme_create" 00:14:00.784 }, 00:14:00.784 { 00:14:00.784 "method": "bdev_wait_for_examine" 00:14:00.784 } 00:14:00.784 ] 00:14:00.784 } 00:14:00.784 ] 00:14:00.784 } 00:14:00.784 [2024-12-15 02:09:25.285826] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:00.784 [2024-12-15 02:09:25.285970] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72602 ] 00:14:00.784 [2024-12-15 02:09:25.449890] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:01.045 [2024-12-15 02:09:25.567721] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:01.306 Running I/O for 5 seconds... 00:14:03.259 33689.00 IOPS, 131.60 MiB/s [2024-12-15T02:09:28.968Z] 33779.50 IOPS, 131.95 MiB/s [2024-12-15T02:09:29.911Z] 35106.33 IOPS, 137.13 MiB/s [2024-12-15T02:09:31.299Z] 35689.75 IOPS, 139.41 MiB/s [2024-12-15T02:09:31.299Z] 35399.80 IOPS, 138.28 MiB/s 00:14:06.534 Latency(us) 00:14:06.534 [2024-12-15T02:09:31.299Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:06.534 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:06.534 xnvme_bdev : 5.00 35388.17 138.24 0.00 0.00 1804.76 1020.85 7662.67 00:14:06.534 [2024-12-15T02:09:31.299Z] =================================================================================================================== 00:14:06.534 [2024-12-15T02:09:31.299Z] Total : 35388.17 138.24 0.00 0.00 1804.76 1020.85 7662.67 00:14:07.108 02:09:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:07.108 02:09:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:07.108 02:09:31 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:07.108 02:09:31 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:07.108 02:09:31 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:07.108 { 00:14:07.108 "subsystems": [ 00:14:07.108 { 00:14:07.108 "subsystem": "bdev", 00:14:07.108 "config": [ 00:14:07.108 { 00:14:07.108 "params": { 00:14:07.108 "io_mechanism": "io_uring_cmd", 00:14:07.108 "conserve_cpu": false, 00:14:07.108 "filename": "/dev/ng0n1", 00:14:07.108 "name": "xnvme_bdev" 00:14:07.108 }, 00:14:07.108 "method": "bdev_xnvme_create" 00:14:07.108 }, 00:14:07.108 { 00:14:07.108 "method": "bdev_wait_for_examine" 00:14:07.108 } 00:14:07.108 ] 00:14:07.108 } 00:14:07.108 ] 00:14:07.108 } 00:14:07.108 [2024-12-15 02:09:31.730838] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
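Aside: the { "subsystems": ... } JSON blocks printed in this section are the bdev configuration that gen_conf streams to bdevperf over /dev/fd/62. A minimal stand-alone sketch of the same invocation, feeding that JSON on fd 62 with a here-document (paths and the /dev/ng0n1 device are taken from this CI host):

cd /home/vagrant/spdk_repo/spdk
./build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 62<<'JSON'
{
  "subsystems": [{
    "subsystem": "bdev",
    "config": [
      { "method": "bdev_xnvme_create",
        "params": { "io_mechanism": "io_uring_cmd", "conserve_cpu": false,
                    "filename": "/dev/ng0n1", "name": "xnvme_bdev" } },
      { "method": "bdev_wait_for_examine" }
    ]
  }]
}
JSON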
00:14:07.108 [2024-12-15 02:09:31.730998] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72672 ] 00:14:07.370 [2024-12-15 02:09:31.892165] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:07.370 [2024-12-15 02:09:32.012828] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:07.632 Running I/O for 5 seconds... 00:14:09.963 34164.00 IOPS, 133.45 MiB/s [2024-12-15T02:09:35.675Z] 34390.50 IOPS, 134.34 MiB/s [2024-12-15T02:09:36.619Z] 34632.00 IOPS, 135.28 MiB/s [2024-12-15T02:09:37.564Z] 34562.75 IOPS, 135.01 MiB/s [2024-12-15T02:09:37.564Z] 34571.20 IOPS, 135.04 MiB/s 00:14:12.799 Latency(us) 00:14:12.799 [2024-12-15T02:09:37.564Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:12.799 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:12.799 xnvme_bdev : 5.01 34539.52 134.92 0.00 0.00 1847.83 354.46 7612.26 00:14:12.799 [2024-12-15T02:09:37.564Z] =================================================================================================================== 00:14:12.799 [2024-12-15T02:09:37.564Z] Total : 34539.52 134.92 0.00 0.00 1847.83 354.46 7612.26 00:14:13.372 02:09:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:13.372 02:09:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:13.372 02:09:38 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:13.372 02:09:38 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:13.372 02:09:38 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:13.633 { 00:14:13.633 "subsystems": [ 00:14:13.633 { 00:14:13.633 "subsystem": "bdev", 00:14:13.633 "config": [ 00:14:13.633 { 00:14:13.633 "params": { 00:14:13.633 "io_mechanism": "io_uring_cmd", 00:14:13.633 "conserve_cpu": false, 00:14:13.633 "filename": "/dev/ng0n1", 00:14:13.633 "name": "xnvme_bdev" 00:14:13.633 }, 00:14:13.633 "method": "bdev_xnvme_create" 00:14:13.633 }, 00:14:13.633 { 00:14:13.633 "method": "bdev_wait_for_examine" 00:14:13.633 } 00:14:13.633 ] 00:14:13.633 } 00:14:13.633 ] 00:14:13.633 } 00:14:13.633 [2024-12-15 02:09:38.174958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:13.633 [2024-12-15 02:09:38.175109] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72746 ] 00:14:13.633 [2024-12-15 02:09:38.338219] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:13.895 [2024-12-15 02:09:38.467884] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:14.157 Running I/O for 5 seconds... 
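Aside on reading the Latency(us) tables: the MiB/s column is just IOPS scaled by the fixed 4 KiB IO size (-o 4096), i.e. MiB/s = IOPS * 4096 / 2^20. For the randwrite total above, 34539.52 * 4096 / 1048576 = 134.92 MiB/s, matching the table. The same check as a one-liner:

awk 'BEGIN { printf "%.2f MiB/s\n", 34539.52 * 4096 / 1048576 }'   # -> 134.92 MiB/s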
00:14:16.044 78912.00 IOPS, 308.25 MiB/s [2024-12-15T02:09:42.197Z] 77888.00 IOPS, 304.25 MiB/s [2024-12-15T02:09:42.768Z] 77888.00 IOPS, 304.25 MiB/s [2024-12-15T02:09:44.149Z] 78576.00 IOPS, 306.94 MiB/s 00:14:19.384 Latency(us) 00:14:19.384 [2024-12-15T02:09:44.149Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:19.384 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:14:19.384 xnvme_bdev : 5.00 81298.82 317.57 0.00 0.00 783.84 491.52 3604.48 00:14:19.384 [2024-12-15T02:09:44.149Z] =================================================================================================================== 00:14:19.384 [2024-12-15T02:09:44.149Z] Total : 81298.82 317.57 0.00 0.00 783.84 491.52 3604.48 00:14:19.956 02:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:19.956 02:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:14:19.956 02:09:44 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:19.956 02:09:44 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:19.956 02:09:44 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:19.956 { 00:14:19.956 "subsystems": [ 00:14:19.956 { 00:14:19.956 "subsystem": "bdev", 00:14:19.956 "config": [ 00:14:19.956 { 00:14:19.956 "params": { 00:14:19.956 "io_mechanism": "io_uring_cmd", 00:14:19.956 "conserve_cpu": false, 00:14:19.956 "filename": "/dev/ng0n1", 00:14:19.956 "name": "xnvme_bdev" 00:14:19.956 }, 00:14:19.956 "method": "bdev_xnvme_create" 00:14:19.956 }, 00:14:19.956 { 00:14:19.956 "method": "bdev_wait_for_examine" 00:14:19.956 } 00:14:19.956 ] 00:14:19.956 } 00:14:19.956 ] 00:14:19.956 } 00:14:19.956 [2024-12-15 02:09:44.623070] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:19.956 [2024-12-15 02:09:44.623242] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid72826 ] 00:14:20.217 [2024-12-15 02:09:44.787436] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:20.217 [2024-12-15 02:09:44.905875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:20.479 Running I/O for 5 seconds... 
00:14:22.799 41793.00 IOPS, 163.25 MiB/s [2024-12-15T02:09:48.505Z] 43047.00 IOPS, 168.15 MiB/s [2024-12-15T02:09:49.441Z] 41886.33 IOPS, 163.62 MiB/s [2024-12-15T02:09:50.490Z] 41052.50 IOPS, 160.36 MiB/s [2024-12-15T02:09:50.490Z] 40422.60 IOPS, 157.90 MiB/s 00:14:25.725 Latency(us) 00:14:25.725 [2024-12-15T02:09:50.490Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:25.725 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:14:25.725 xnvme_bdev : 5.00 40399.30 157.81 0.00 0.00 1579.76 241.03 18249.26 00:14:25.725 [2024-12-15T02:09:50.490Z] =================================================================================================================== 00:14:25.725 [2024-12-15T02:09:50.490Z] Total : 40399.30 157.81 0.00 0.00 1579.76 241.03 18249.26 00:14:26.295 00:14:26.295 real 0m25.774s 00:14:26.295 user 0m13.820s 00:14:26.295 sys 0m11.441s 00:14:26.295 ************************************ 00:14:26.295 END TEST xnvme_bdevperf 00:14:26.295 ************************************ 00:14:26.295 02:09:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:26.295 02:09:50 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:26.295 02:09:51 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:14:26.295 02:09:51 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:26.295 02:09:51 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:26.295 02:09:51 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:26.295 ************************************ 00:14:26.295 START TEST xnvme_fio_plugin 00:14:26.295 ************************************ 00:14:26.295 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:14:26.295 02:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:14:26.295 02:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:14:26.295 02:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- 
common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:26.555 02:09:51 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:26.555 { 00:14:26.555 "subsystems": [ 00:14:26.555 { 00:14:26.555 "subsystem": "bdev", 00:14:26.555 "config": [ 00:14:26.555 { 00:14:26.555 "params": { 00:14:26.555 "io_mechanism": "io_uring_cmd", 00:14:26.555 "conserve_cpu": false, 00:14:26.555 "filename": "/dev/ng0n1", 00:14:26.555 "name": "xnvme_bdev" 00:14:26.555 }, 00:14:26.555 "method": "bdev_xnvme_create" 00:14:26.555 }, 00:14:26.555 { 00:14:26.555 "method": "bdev_wait_for_examine" 00:14:26.555 } 00:14:26.555 ] 00:14:26.555 } 00:14:26.555 ] 00:14:26.555 } 00:14:26.555 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:26.555 fio-3.35 00:14:26.555 Starting 1 thread 00:14:33.141 00:14:33.141 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=72944: Sun Dec 15 02:09:56 2024 00:14:33.141 read: IOPS=36.7k, BW=143MiB/s (150MB/s)(717MiB/5001msec) 00:14:33.141 slat (usec): min=2, max=572, avg= 4.00, stdev= 3.11 00:14:33.141 clat (usec): min=913, max=7581, avg=1581.33, stdev=264.55 00:14:33.141 lat (usec): min=916, max=7583, avg=1585.33, stdev=265.10 00:14:33.141 clat percentiles (usec): 00:14:33.141 | 1.00th=[ 1106], 5.00th=[ 1221], 10.00th=[ 1287], 20.00th=[ 1369], 00:14:33.141 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1549], 60.00th=[ 1614], 00:14:33.141 | 70.00th=[ 1680], 80.00th=[ 1778], 90.00th=[ 1942], 95.00th=[ 2073], 00:14:33.141 | 99.00th=[ 2343], 99.50th=[ 2474], 99.90th=[ 2769], 99.95th=[ 2933], 00:14:33.141 | 99.99th=[ 3425] 00:14:33.141 bw ( KiB/s): min=140288, max=166888, per=99.84%, avg=146622.22, stdev=8206.27, samples=9 00:14:33.141 iops : min=35072, max=41722, avg=36655.56, stdev=2051.57, samples=9 00:14:33.141 lat (usec) : 1000=0.06% 00:14:33.141 lat (msec) : 2=92.82%, 4=7.12%, 10=0.01% 00:14:33.141 cpu : usr=32.72%, sys=65.56%, ctx=23, majf=0, minf=762 00:14:33.141 IO depths : 1=1.6%, 2=3.1%, 4=6.3%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:14:33.141 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:33.141 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 
32=0.0%, 64=1.5%, >=64=0.0% 00:14:33.141 issued rwts: total=183610,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:33.141 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:33.141 00:14:33.141 Run status group 0 (all jobs): 00:14:33.141 READ: bw=143MiB/s (150MB/s), 143MiB/s-143MiB/s (150MB/s-150MB/s), io=717MiB (752MB), run=5001-5001msec 00:14:33.402 ----------------------------------------------------- 00:14:33.402 Suppressions used: 00:14:33.402 count bytes template 00:14:33.402 1 11 /usr/src/fio/parse.c 00:14:33.402 1 8 libtcmalloc_minimal.so 00:14:33.402 1 904 libcrypto.so 00:14:33.402 ----------------------------------------------------- 00:14:33.402 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:14:33.402 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:14:33.403 02:09:57 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k 
--iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:14:33.403 { 00:14:33.403 "subsystems": [ 00:14:33.403 { 00:14:33.403 "subsystem": "bdev", 00:14:33.403 "config": [ 00:14:33.403 { 00:14:33.403 "params": { 00:14:33.403 "io_mechanism": "io_uring_cmd", 00:14:33.403 "conserve_cpu": false, 00:14:33.403 "filename": "/dev/ng0n1", 00:14:33.403 "name": "xnvme_bdev" 00:14:33.403 }, 00:14:33.403 "method": "bdev_xnvme_create" 00:14:33.403 }, 00:14:33.403 { 00:14:33.403 "method": "bdev_wait_for_examine" 00:14:33.403 } 00:14:33.403 ] 00:14:33.403 } 00:14:33.403 ] 00:14:33.403 } 00:14:33.403 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:14:33.403 fio-3.35 00:14:33.403 Starting 1 thread 00:14:39.987 00:14:39.987 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73029: Sun Dec 15 02:10:03 2024 00:14:39.987 write: IOPS=39.9k, BW=156MiB/s (164MB/s)(781MiB/5004msec); 0 zone resets 00:14:39.987 slat (usec): min=2, max=240, avg= 3.73, stdev= 2.36 00:14:39.987 clat (usec): min=147, max=6741, avg=1460.06, stdev=328.49 00:14:39.987 lat (usec): min=151, max=6745, avg=1463.79, stdev=328.82 00:14:39.987 clat percentiles (usec): 00:14:39.987 | 1.00th=[ 816], 5.00th=[ 1037], 10.00th=[ 1106], 20.00th=[ 1188], 00:14:39.987 | 30.00th=[ 1270], 40.00th=[ 1336], 50.00th=[ 1418], 60.00th=[ 1500], 00:14:39.987 | 70.00th=[ 1598], 80.00th=[ 1713], 90.00th=[ 1876], 95.00th=[ 2008], 00:14:39.987 | 99.00th=[ 2343], 99.50th=[ 2540], 99.90th=[ 3195], 99.95th=[ 3818], 00:14:39.987 | 99.99th=[ 5342] 00:14:39.987 bw ( KiB/s): min=135336, max=180992, per=99.38%, avg=158758.22, stdev=17819.56, samples=9 00:14:39.987 iops : min=33834, max=45248, avg=39689.56, stdev=4454.89, samples=9 00:14:39.987 lat (usec) : 250=0.01%, 500=0.25%, 750=0.35%, 1000=3.08% 00:14:39.987 lat (msec) : 2=90.95%, 4=5.34%, 10=0.03% 00:14:39.987 cpu : usr=39.12%, sys=59.30%, ctx=25, majf=0, minf=763 00:14:39.987 IO depths : 1=1.4%, 2=2.8%, 4=5.6%, 8=11.3%, 16=23.0%, 32=54.1%, >=64=1.8% 00:14:39.987 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:14:39.987 complete : 0=0.0%, 4=98.3%, 8=0.1%, 16=0.1%, 32=0.2%, 64=1.4%, >=64=0.0% 00:14:39.987 issued rwts: total=0,199848,0,0 short=0,0,0,0 dropped=0,0,0,0 00:14:39.987 latency : target=0, window=0, percentile=100.00%, depth=64 00:14:39.987 00:14:39.987 Run status group 0 (all jobs): 00:14:39.987 WRITE: bw=156MiB/s (164MB/s), 156MiB/s-156MiB/s (164MB/s-164MB/s), io=781MiB (819MB), run=5004-5004msec 00:14:40.249 ----------------------------------------------------- 00:14:40.249 Suppressions used: 00:14:40.249 count bytes template 00:14:40.249 1 11 /usr/src/fio/parse.c 00:14:40.249 1 8 libtcmalloc_minimal.so 00:14:40.249 1 904 libcrypto.so 00:14:40.249 ----------------------------------------------------- 00:14:40.249 00:14:40.249 00:14:40.249 real 0m13.814s 00:14:40.249 user 0m6.476s 00:14:40.249 sys 0m6.842s 00:14:40.249 02:10:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:40.249 ************************************ 00:14:40.249 END TEST xnvme_fio_plugin 00:14:40.249 ************************************ 00:14:40.249 02:10:04 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:14:40.249 02:10:04 nvme_xnvme -- xnvme/xnvme.sh@82 -- # for cc in "${xnvme_conserve_cpu[@]}" 00:14:40.249 02:10:04 nvme_xnvme -- xnvme/xnvme.sh@83 -- # method_bdev_xnvme_create_0["conserve_cpu"]=true 00:14:40.249 02:10:04 nvme_xnvme -- 
xnvme/xnvme.sh@84 -- # conserve_cpu=true 00:14:40.249 02:10:04 nvme_xnvme -- xnvme/xnvme.sh@86 -- # run_test xnvme_rpc xnvme_rpc 00:14:40.249 02:10:04 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:40.249 02:10:04 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:40.249 02:10:04 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:40.249 ************************************ 00:14:40.249 START TEST xnvme_rpc 00:14:40.249 ************************************ 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1129 -- # xnvme_rpc 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # cc=() 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@48 -- # local -A cc 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["false"]= 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@50 -- # cc["true"]=-c 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@53 -- # spdk_tgt=73115 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@54 -- # waitforlisten 73115 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@835 -- # '[' -z 73115 ']' 00:14:40.249 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@840 -- # local max_retries=100 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@844 -- # xtrace_disable 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:40.249 02:10:04 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@52 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:14:40.508 [2024-12-15 02:10:05.029627] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
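Aside: the xnvme_rpc test that follows drives the freshly started spdk_tgt over its RPC socket. A sketch of the same sequence performed by hand with scripts/rpc.py (rpc_cmd in the trace is a wrapper around it; the default socket /var/tmp/spdk.sock is assumed):

cd /home/vagrant/spdk_repo/spdk
# create the xnvme bdev on the NVMe generic node, with conserve_cpu enabled (-c)
./scripts/rpc.py bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c
# read a parameter back the way the test does: dump the bdev config, filter with jq
./scripts/rpc.py framework_get_config bdev \
  | jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu'   # expect: true
# tear the bdev down again
./scripts/rpc.py bdev_xnvme_delete xnvme_bdev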
00:14:40.508 [2024-12-15 02:10:05.029773] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73115 ] 00:14:40.508 [2024-12-15 02:10:05.192362] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:40.767 [2024-12-15 02:10:05.315257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:41.337 02:10:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:14:41.337 02:10:05 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@868 -- # return 0 00:14:41.337 02:10:05 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@56 -- # rpc_cmd bdev_xnvme_create /dev/ng0n1 xnvme_bdev io_uring_cmd -c 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.337 xnvme_bdev 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # rpc_xnvme name 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.name' 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@62 -- # [[ xnvme_bdev == \x\n\v\m\e\_\b\d\e\v ]] 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # rpc_xnvme filename 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.filename' 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@63 -- # [[ /dev/ng0n1 == \/\d\e\v\/\n\g\0\n\1 ]] 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # rpc_xnvme io_mechanism 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.337 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.338 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.io_mechanism' 00:14:41.338 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@64 -- # [[ io_uring_cmd == \i\o\_\u\r\i\n\g\_\c\m\d ]] 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # rpc_xnvme conserve_cpu 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@65 -- # rpc_cmd framework_get_config bdev 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- 
common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/common.sh@66 -- # jq -r '.[] | select(.method == "bdev_xnvme_create").params.conserve_cpu' 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@65 -- # [[ true == \t\r\u\e ]] 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@67 -- # rpc_cmd bdev_xnvme_delete xnvme_bdev 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@563 -- # xtrace_disable 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- xnvme/xnvme.sh@70 -- # killprocess 73115 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@954 -- # '[' -z 73115 ']' 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@958 -- # kill -0 73115 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # uname 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 73115 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:14:41.598 killing process with pid 73115 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73115' 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@973 -- # kill 73115 00:14:41.598 02:10:06 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@978 -- # wait 73115 00:14:43.512 00:14:43.512 real 0m2.902s 00:14:43.512 user 0m2.906s 00:14:43.512 sys 0m0.483s 00:14:43.512 02:10:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@1130 -- # xtrace_disable 00:14:43.512 02:10:07 nvme_xnvme.xnvme_rpc -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 ************************************ 00:14:43.512 END TEST xnvme_rpc 00:14:43.512 ************************************ 00:14:43.512 02:10:07 nvme_xnvme -- xnvme/xnvme.sh@87 -- # run_test xnvme_bdevperf xnvme_bdevperf 00:14:43.512 02:10:07 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:14:43.512 02:10:07 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:14:43.512 02:10:07 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 ************************************ 00:14:43.512 START TEST xnvme_bdevperf 00:14:43.512 ************************************ 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1129 -- # xnvme_bdevperf 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@12 -- # local io_pattern 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@13 -- # local -n io_pattern_ref=io_uring_cmd 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randread -t 5 -T xnvme_bdev -o 4096 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- 
xnvme/xnvme.sh@17 -- # gen_conf 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:43.512 02:10:07 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:43.512 { 00:14:43.512 "subsystems": [ 00:14:43.512 { 00:14:43.512 "subsystem": "bdev", 00:14:43.512 "config": [ 00:14:43.512 { 00:14:43.512 "params": { 00:14:43.512 "io_mechanism": "io_uring_cmd", 00:14:43.512 "conserve_cpu": true, 00:14:43.512 "filename": "/dev/ng0n1", 00:14:43.512 "name": "xnvme_bdev" 00:14:43.512 }, 00:14:43.512 "method": "bdev_xnvme_create" 00:14:43.512 }, 00:14:43.512 { 00:14:43.512 "method": "bdev_wait_for_examine" 00:14:43.512 } 00:14:43.512 ] 00:14:43.512 } 00:14:43.512 ] 00:14:43.512 } 00:14:43.512 [2024-12-15 02:10:07.995511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:43.512 [2024-12-15 02:10:07.995654] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73183 ] 00:14:43.512 [2024-12-15 02:10:08.155746] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:43.773 [2024-12-15 02:10:08.277408] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:44.033 Running I/O for 5 seconds... 00:14:45.919 43456.00 IOPS, 169.75 MiB/s [2024-12-15T02:10:11.628Z] 40004.50 IOPS, 156.27 MiB/s [2024-12-15T02:10:12.571Z] 40158.67 IOPS, 156.87 MiB/s [2024-12-15T02:10:13.957Z] 41431.00 IOPS, 161.84 MiB/s 00:14:49.192 Latency(us) 00:14:49.192 [2024-12-15T02:10:13.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:49.192 Job: xnvme_bdev (Core Mask 0x1, workload: randread, depth: 64, IO size: 4096) 00:14:49.192 xnvme_bdev : 5.00 42511.17 166.06 0.00 0.00 1501.74 831.80 6301.54 00:14:49.192 [2024-12-15T02:10:13.957Z] =================================================================================================================== 00:14:49.192 [2024-12-15T02:10:13.957Z] Total : 42511.17 166.06 0.00 0.00 1501.74 831.80 6301.54 00:14:49.763 02:10:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:49.763 02:10:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:49.763 02:10:14 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:49.763 02:10:14 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:49.763 02:10:14 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w randwrite -t 5 -T xnvme_bdev -o 4096 00:14:49.763 { 00:14:49.763 "subsystems": [ 00:14:49.763 { 00:14:49.763 "subsystem": "bdev", 00:14:49.763 "config": [ 00:14:49.763 { 00:14:49.763 "params": { 00:14:49.763 "io_mechanism": "io_uring_cmd", 00:14:49.763 "conserve_cpu": true, 00:14:49.763 "filename": "/dev/ng0n1", 00:14:49.763 "name": "xnvme_bdev" 00:14:49.763 }, 00:14:49.763 "method": "bdev_xnvme_create" 00:14:49.763 }, 00:14:49.763 { 00:14:49.763 "method": "bdev_wait_for_examine" 00:14:49.763 } 00:14:49.763 ] 00:14:49.763 } 00:14:49.763 ] 00:14:49.763 } 00:14:49.763 [2024-12-15 02:10:14.433715] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
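Aside: /dev/ng0n1, the filename in every config here, is the NVMe generic character device the kernel exposes alongside the block node /dev/nvme0n1; the io_uring_cmd mechanism targets the char device because it submits NVMe passthrough commands rather than block IO. A quick sanity check on the node type:

stat -c '%F' /dev/ng0n1   # -> character special file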
00:14:49.763 [2024-12-15 02:10:14.433855] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73263 ] 00:14:50.024 [2024-12-15 02:10:14.597794] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:50.024 [2024-12-15 02:10:14.715584] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:50.286 Running I/O for 5 seconds... 00:14:52.246 38038.00 IOPS, 148.59 MiB/s [2024-12-15T02:10:18.394Z] 38285.00 IOPS, 149.55 MiB/s [2024-12-15T02:10:19.357Z] 38220.00 IOPS, 149.30 MiB/s [2024-12-15T02:10:20.349Z] 38375.25 IOPS, 149.90 MiB/s [2024-12-15T02:10:20.349Z] 38408.40 IOPS, 150.03 MiB/s 00:14:55.584 Latency(us) 00:14:55.584 [2024-12-15T02:10:20.349Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:14:55.584 Job: xnvme_bdev (Core Mask 0x1, workload: randwrite, depth: 64, IO size: 4096) 00:14:55.584 xnvme_bdev : 5.00 38394.31 149.98 0.00 0.00 1662.27 538.78 7612.26 00:14:55.584 [2024-12-15T02:10:20.349Z] =================================================================================================================== 00:14:55.584 [2024-12-15T02:10:20.349Z] Total : 38394.31 149.98 0.00 0.00 1662.27 538.78 7612.26 00:14:56.157 02:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:14:56.157 02:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w unmap -t 5 -T xnvme_bdev -o 4096 00:14:56.157 02:10:20 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:14:56.157 02:10:20 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:14:56.157 02:10:20 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:14:56.157 { 00:14:56.157 "subsystems": [ 00:14:56.157 { 00:14:56.157 "subsystem": "bdev", 00:14:56.157 "config": [ 00:14:56.157 { 00:14:56.157 "params": { 00:14:56.157 "io_mechanism": "io_uring_cmd", 00:14:56.157 "conserve_cpu": true, 00:14:56.157 "filename": "/dev/ng0n1", 00:14:56.157 "name": "xnvme_bdev" 00:14:56.157 }, 00:14:56.157 "method": "bdev_xnvme_create" 00:14:56.157 }, 00:14:56.157 { 00:14:56.157 "method": "bdev_wait_for_examine" 00:14:56.157 } 00:14:56.157 ] 00:14:56.157 } 00:14:56.157 ] 00:14:56.157 } 00:14:56.157 [2024-12-15 02:10:20.881821] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:14:56.157 [2024-12-15 02:10:20.881963] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73337 ] 00:14:56.418 [2024-12-15 02:10:21.046324] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:14:56.418 [2024-12-15 02:10:21.163453] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:14:56.990 Running I/O for 5 seconds... 
00:14:58.880 79232.00 IOPS, 309.50 MiB/s [2024-12-15T02:10:24.589Z] 78880.00 IOPS, 308.12 MiB/s [2024-12-15T02:10:25.527Z] 78997.33 IOPS, 308.58 MiB/s [2024-12-15T02:10:26.461Z] 80416.00 IOPS, 314.12 MiB/s [2024-12-15T02:10:26.461Z] 83481.60 IOPS, 326.10 MiB/s 00:15:01.696 Latency(us) 00:15:01.696 [2024-12-15T02:10:26.461Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:01.696 Job: xnvme_bdev (Core Mask 0x1, workload: unmap, depth: 64, IO size: 4096) 00:15:01.696 xnvme_bdev : 5.00 83445.42 325.96 0.00 0.00 763.55 397.00 2898.71 00:15:01.696 [2024-12-15T02:10:26.461Z] =================================================================================================================== 00:15:01.696 [2024-12-15T02:10:26.461Z] Total : 83445.42 325.96 0.00 0.00 763.55 397.00 2898.71 00:15:02.265 02:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@15 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:02.265 02:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # gen_conf 00:15:02.265 02:10:27 nvme_xnvme.xnvme_bdevperf -- xnvme/xnvme.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /dev/fd/62 -q 64 -w write_zeroes -t 5 -T xnvme_bdev -o 4096 00:15:02.265 02:10:27 nvme_xnvme.xnvme_bdevperf -- dd/common.sh@31 -- # xtrace_disable 00:15:02.265 02:10:27 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:02.527 { 00:15:02.527 "subsystems": [ 00:15:02.527 { 00:15:02.527 "subsystem": "bdev", 00:15:02.527 "config": [ 00:15:02.527 { 00:15:02.527 "params": { 00:15:02.527 "io_mechanism": "io_uring_cmd", 00:15:02.527 "conserve_cpu": true, 00:15:02.527 "filename": "/dev/ng0n1", 00:15:02.527 "name": "xnvme_bdev" 00:15:02.527 }, 00:15:02.527 "method": "bdev_xnvme_create" 00:15:02.527 }, 00:15:02.527 { 00:15:02.527 "method": "bdev_wait_for_examine" 00:15:02.527 } 00:15:02.527 ] 00:15:02.527 } 00:15:02.527 ] 00:15:02.527 } 00:15:02.527 [2024-12-15 02:10:27.084127] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:02.527 [2024-12-15 02:10:27.084279] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73407 ] 00:15:02.527 [2024-12-15 02:10:27.247888] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:02.788 [2024-12-15 02:10:27.368885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:03.048 Running I/O for 5 seconds... 
00:15:04.926 44825.00 IOPS, 175.10 MiB/s [2024-12-15T02:10:31.072Z] 44257.00 IOPS, 172.88 MiB/s [2024-12-15T02:10:32.015Z] 42631.33 IOPS, 166.53 MiB/s [2024-12-15T02:10:32.956Z] 41682.25 IOPS, 162.82 MiB/s [2024-12-15T02:10:32.957Z] 40846.80 IOPS, 159.56 MiB/s 00:15:08.192 Latency(us) 00:15:08.192 [2024-12-15T02:10:32.957Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:15:08.192 Job: xnvme_bdev (Core Mask 0x1, workload: write_zeroes, depth: 64, IO size: 4096) 00:15:08.192 xnvme_bdev : 5.00 40827.97 159.48 0.00 0.00 1562.37 82.31 20769.87 00:15:08.192 [2024-12-15T02:10:32.957Z] =================================================================================================================== 00:15:08.192 [2024-12-15T02:10:32.957Z] Total : 40827.97 159.48 0.00 0.00 1562.37 82.31 20769.87 00:15:08.765 00:15:08.765 real 0m25.527s 00:15:08.765 user 0m16.749s 00:15:08.765 sys 0m6.662s 00:15:08.765 02:10:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:08.765 ************************************ 00:15:08.765 02:10:33 nvme_xnvme.xnvme_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 END TEST xnvme_bdevperf 00:15:08.765 ************************************ 00:15:08.765 02:10:33 nvme_xnvme -- xnvme/xnvme.sh@88 -- # run_test xnvme_fio_plugin xnvme_fio_plugin 00:15:08.765 02:10:33 nvme_xnvme -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:15:08.765 02:10:33 nvme_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:08.765 02:10:33 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:08.765 ************************************ 00:15:08.765 START TEST xnvme_fio_plugin 00:15:08.765 ************************************ 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1129 -- # xnvme_fio_plugin 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@27 -- # local io_pattern 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@28 -- # local -n io_pattern_ref=io_uring_cmd_fio 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:08.765 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:09.026 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:09.026 02:10:33 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:09.026 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:09.026 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.026 02:10:33 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 
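Aside: the xtrace here is assembling the fio run; the finished command, visible once tracing completes below, amounts to:

LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' \
/usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 \
  --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 \
  --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev

The SPDK fio bdev plugin is LD_PRELOADed together with libasan (this build is ASan-instrumented and the sanitizer runtime has to be loaded first), and the bdev JSON config is again passed on /dev/fd/62.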
00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:09.027 02:10:33 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randread --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:09.027 { 00:15:09.027 "subsystems": [ 00:15:09.027 { 00:15:09.027 "subsystem": "bdev", 00:15:09.027 "config": [ 00:15:09.027 { 00:15:09.027 "params": { 00:15:09.027 "io_mechanism": "io_uring_cmd", 00:15:09.027 "conserve_cpu": true, 00:15:09.027 "filename": "/dev/ng0n1", 00:15:09.027 "name": "xnvme_bdev" 00:15:09.027 }, 00:15:09.027 "method": "bdev_xnvme_create" 00:15:09.027 }, 00:15:09.027 { 00:15:09.027 "method": "bdev_wait_for_examine" 00:15:09.027 } 00:15:09.027 ] 00:15:09.027 } 00:15:09.027 ] 00:15:09.027 } 00:15:09.027 xnvme_bdev: (g=0): rw=randread, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:09.027 fio-3.35 00:15:09.027 Starting 1 thread 00:15:15.657 00:15:15.657 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73525: Sun Dec 15 02:10:39 2024 00:15:15.657 read: IOPS=36.9k, BW=144MiB/s (151MB/s)(721MiB/5001msec) 00:15:15.657 slat (nsec): min=2887, max=99013, avg=3895.18, stdev=2132.14 00:15:15.657 clat (usec): min=928, max=5171, avg=1575.97, stdev=245.37 00:15:15.657 lat (usec): min=931, max=5203, avg=1579.86, stdev=245.89 00:15:15.657 clat percentiles (usec): 00:15:15.657 | 1.00th=[ 1156], 5.00th=[ 1254], 10.00th=[ 1319], 20.00th=[ 1385], 00:15:15.657 | 30.00th=[ 1434], 40.00th=[ 1483], 50.00th=[ 1532], 60.00th=[ 1598], 00:15:15.657 | 70.00th=[ 1663], 80.00th=[ 1745], 90.00th=[ 1893], 95.00th=[ 2008], 00:15:15.657 | 99.00th=[ 2278], 99.50th=[ 2442], 99.90th=[ 2835], 99.95th=[ 3916], 00:15:15.657 | 99.99th=[ 5014] 00:15:15.657 bw ( KiB/s): min=137728, max=151552, per=99.68%, avg=147114.67, stdev=4529.10, samples=9 00:15:15.657 iops : min=34432, max=37888, avg=36778.67, stdev=1132.28, samples=9 00:15:15.657 lat (usec) : 1000=0.02% 00:15:15.657 lat (msec) : 2=94.86%, 4=5.08%, 10=0.05% 00:15:15.658 cpu : usr=47.08%, sys=49.40%, ctx=12, majf=0, minf=762 00:15:15.658 IO depths : 1=1.6%, 2=3.1%, 4=6.2%, 8=12.5%, 16=25.0%, 32=50.0%, >=64=1.6% 00:15:15.658 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:15.658 complete : 0=0.0%, 4=98.5%, 8=0.0%, 16=0.0%, 32=0.0%, 
64=1.5%, >=64=0.0% 00:15:15.658 issued rwts: total=184512,0,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:15.658 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:15.658 00:15:15.658 Run status group 0 (all jobs): 00:15:15.658 READ: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=721MiB (756MB), run=5001-5001msec 00:15:15.658 ----------------------------------------------------- 00:15:15.658 Suppressions used: 00:15:15.658 count bytes template 00:15:15.658 1 11 /usr/src/fio/parse.c 00:15:15.658 1 8 libtcmalloc_minimal.so 00:15:15.658 1 904 libcrypto.so 00:15:15.658 ----------------------------------------------------- 00:15:15.658 00:15:15.918 02:10:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@30 -- # for io_pattern in "${io_pattern_ref[@]}" 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # fio_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 --numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1343 -- # local sanitizers 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1345 -- # shift 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1347 -- # local asan_lib= 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- xnvme/xnvme.sh@32 -- # gen_conf 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- dd/common.sh@31 -- # xtrace_disable 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # grep libasan 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1351 -- # break 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:15:15.919 02:10:40 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --spdk_json_conf=/dev/fd/62 --filename=xnvme_bdev --direct=1 --bs=4k --iodepth=64 
--numjobs=1 --rw=randwrite --time_based --runtime=5 --thread=1 --name xnvme_bdev 00:15:15.919 { 00:15:15.919 "subsystems": [ 00:15:15.919 { 00:15:15.919 "subsystem": "bdev", 00:15:15.919 "config": [ 00:15:15.919 { 00:15:15.919 "params": { 00:15:15.919 "io_mechanism": "io_uring_cmd", 00:15:15.919 "conserve_cpu": true, 00:15:15.919 "filename": "/dev/ng0n1", 00:15:15.919 "name": "xnvme_bdev" 00:15:15.919 }, 00:15:15.919 "method": "bdev_xnvme_create" 00:15:15.919 }, 00:15:15.919 { 00:15:15.919 "method": "bdev_wait_for_examine" 00:15:15.919 } 00:15:15.919 ] 00:15:15.919 } 00:15:15.919 ] 00:15:15.919 } 00:15:15.919 xnvme_bdev: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=64 00:15:15.919 fio-3.35 00:15:15.919 Starting 1 thread 00:15:22.505 00:15:22.505 xnvme_bdev: (groupid=0, jobs=1): err= 0: pid=73616: Sun Dec 15 02:10:46 2024 00:15:22.505 write: IOPS=37.0k, BW=144MiB/s (151MB/s)(722MiB/5001msec); 0 zone resets 00:15:22.505 slat (usec): min=2, max=114, avg= 4.02, stdev= 2.21 00:15:22.505 clat (usec): min=58, max=25700, avg=1575.68, stdev=1419.56 00:15:22.505 lat (usec): min=62, max=25703, avg=1579.70, stdev=1419.67 00:15:22.505 clat percentiles (usec): 00:15:22.505 | 1.00th=[ 914], 5.00th=[ 1074], 10.00th=[ 1139], 20.00th=[ 1221], 00:15:22.505 | 30.00th=[ 1287], 40.00th=[ 1369], 50.00th=[ 1434], 60.00th=[ 1500], 00:15:22.505 | 70.00th=[ 1582], 80.00th=[ 1680], 90.00th=[ 1811], 95.00th=[ 1975], 00:15:22.505 | 99.00th=[ 2900], 99.50th=[15139], 99.90th=[20841], 99.95th=[22152], 00:15:22.505 | 99.99th=[24249] 00:15:22.505 bw ( KiB/s): min=56288, max=176800, per=98.74%, avg=145944.00, stdev=35294.72, samples=9 00:15:22.505 iops : min=14072, max=44200, avg=36486.00, stdev=8823.68, samples=9 00:15:22.505 lat (usec) : 100=0.01%, 250=0.10%, 500=0.31%, 750=0.36%, 1000=0.84% 00:15:22.506 lat (msec) : 2=93.78%, 4=3.75%, 10=0.10%, 20=0.61%, 50=0.15% 00:15:22.506 cpu : usr=61.24%, sys=33.90%, ctx=13, majf=0, minf=763 00:15:22.506 IO depths : 1=1.5%, 2=3.0%, 4=6.0%, 8=12.2%, 16=24.5%, 32=50.9%, >=64=2.0% 00:15:22.506 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:15:22.506 complete : 0=0.0%, 4=98.4%, 8=0.1%, 16=0.1%, 32=0.1%, 64=1.5%, >=64=0.0% 00:15:22.506 issued rwts: total=0,184792,0,0 short=0,0,0,0 dropped=0,0,0,0 00:15:22.506 latency : target=0, window=0, percentile=100.00%, depth=64 00:15:22.506 00:15:22.506 Run status group 0 (all jobs): 00:15:22.506 WRITE: bw=144MiB/s (151MB/s), 144MiB/s-144MiB/s (151MB/s-151MB/s), io=722MiB (757MB), run=5001-5001msec 00:15:22.766 ----------------------------------------------------- 00:15:22.766 Suppressions used: 00:15:22.767 count bytes template 00:15:22.767 1 11 /usr/src/fio/parse.c 00:15:22.767 1 8 libtcmalloc_minimal.so 00:15:22.767 1 904 libcrypto.so 00:15:22.767 ----------------------------------------------------- 00:15:22.767 00:15:22.767 ************************************ 00:15:22.767 END TEST xnvme_fio_plugin 00:15:22.767 ************************************ 00:15:22.767 00:15:22.767 real 0m13.831s 00:15:22.767 user 0m8.266s 00:15:22.767 sys 0m4.820s 00:15:22.767 02:10:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.767 02:10:47 nvme_xnvme.xnvme_fio_plugin -- common/autotest_common.sh@10 -- # set +x 00:15:22.767 Process with pid 73115 is not found 00:15:22.767 02:10:47 nvme_xnvme -- xnvme/xnvme.sh@1 -- # killprocess 73115 00:15:22.767 02:10:47 nvme_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73115 ']' 00:15:22.767 
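Aside: killprocess, tracing here, is the harness's guarded cleanup: kill -0 probes whether the pid still exists without sending a signal, and ps --no-headers -o comm= fetches the process name before anything is killed. A simplified sketch of the idiom (not the harness's exact logic; pid taken from the run above):

pid=73115   # spdk_tgt started for the second xnvme_rpc test
if kill -0 "$pid" 2>/dev/null; then
  echo "killing process with pid $pid ($(ps --no-headers -o comm= "$pid"))"
  kill "$pid" && wait "$pid"   # wait only applies if $pid is a child of this shell
else
  echo "Process with pid $pid is not found"
fi

In this run the probe fails ('No such process' just below): the spdk_tgt was already killed at the end of the second xnvme_rpc test, so the exit trap merely reports that pid 73115 is not found.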
02:10:47 nvme_xnvme -- common/autotest_common.sh@958 -- # kill -0 73115 00:15:22.767 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (73115) - No such process 00:15:22.767 02:10:47 nvme_xnvme -- common/autotest_common.sh@981 -- # echo 'Process with pid 73115 is not found' 00:15:22.767 00:15:22.767 real 3m31.516s 00:15:22.767 user 1m56.890s 00:15:22.767 sys 1m19.544s 00:15:22.767 ************************************ 00:15:22.767 END TEST nvme_xnvme 00:15:22.767 ************************************ 00:15:22.767 02:10:47 nvme_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:22.767 02:10:47 nvme_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:22.767 02:10:47 -- spdk/autotest.sh@245 -- # run_test blockdev_xnvme /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:22.767 02:10:47 -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:22.767 02:10:47 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:22.767 02:10:47 -- common/autotest_common.sh@10 -- # set +x 00:15:22.767 ************************************ 00:15:22.767 START TEST blockdev_xnvme 00:15:22.767 ************************************ 00:15:22.767 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/bdev/blockdev.sh xnvme 00:15:23.028 * Looking for test storage... 00:15:23.028 * Found test storage at /home/vagrant/spdk_repo/spdk/test/bdev 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lcov --version 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@333 -- # local ver1 ver1_l 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@334 -- # local ver2 ver2_l 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@336 -- # IFS=.-: 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@336 -- # read -ra ver1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@337 -- # IFS=.-: 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@337 -- # read -ra ver2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@338 -- # local 'op=<' 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@340 -- # ver1_l=2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@341 -- # ver2_l=1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@344 -- # case "$op" in 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@345 -- # : 1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v = 0 )) 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@365 -- # decimal 1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@365 -- # ver1[v]=1 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@366 -- # decimal 2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@353 -- # local d=2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@355 -- # echo 2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@366 -- # ver2[v]=2 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:15:23.028 02:10:47 blockdev_xnvme -- scripts/common.sh@368 -- # return 0 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:15:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.028 --rc genhtml_branch_coverage=1 00:15:23.028 --rc genhtml_function_coverage=1 00:15:23.028 --rc genhtml_legend=1 00:15:23.028 --rc geninfo_all_blocks=1 00:15:23.028 --rc geninfo_unexecuted_blocks=1 00:15:23.028 00:15:23.028 ' 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:15:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.028 --rc genhtml_branch_coverage=1 00:15:23.028 --rc genhtml_function_coverage=1 00:15:23.028 --rc genhtml_legend=1 00:15:23.028 --rc geninfo_all_blocks=1 00:15:23.028 --rc geninfo_unexecuted_blocks=1 00:15:23.028 00:15:23.028 ' 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:15:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.028 --rc genhtml_branch_coverage=1 00:15:23.028 --rc genhtml_function_coverage=1 00:15:23.028 --rc genhtml_legend=1 00:15:23.028 --rc geninfo_all_blocks=1 00:15:23.028 --rc geninfo_unexecuted_blocks=1 00:15:23.028 00:15:23.028 ' 00:15:23.028 02:10:47 blockdev_xnvme -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:15:23.028 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:15:23.028 --rc genhtml_branch_coverage=1 00:15:23.028 --rc genhtml_function_coverage=1 00:15:23.028 --rc genhtml_legend=1 00:15:23.028 --rc geninfo_all_blocks=1 00:15:23.028 --rc geninfo_unexecuted_blocks=1 00:15:23.028 00:15:23.028 ' 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/bdev/nbd_common.sh 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/nbd_common.sh@6 -- # set -e 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@12 -- # rpc_py=rpc_cmd 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@13 -- # conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@14 -- # nonenclosed_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@15 -- # nonarray_conf_file=/home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- 
# export RPC_PIPE_TIMEOUT=30 00:15:23.028 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@17 -- # RPC_PIPE_TIMEOUT=30 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@20 -- # : 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@707 -- # QOS_DEV_1=Malloc_0 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@708 -- # QOS_DEV_2=Null_1 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@709 -- # QOS_RUN_TIME=5 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # uname -s 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@711 -- # '[' Linux = Linux ']' 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@713 -- # PRE_RESERVED_MEM=0 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@719 -- # test_type=xnvme 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@720 -- # crypto_device= 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@721 -- # dek= 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@722 -- # env_ctx= 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@723 -- # wait_for_rpc= 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@724 -- # '[' -n '' ']' 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == bdev ]] 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@727 -- # [[ xnvme == crypto_* ]] 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@730 -- # start_spdk_tgt 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@47 -- # spdk_tgt_pid=73750 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@48 -- # trap 'killprocess "$spdk_tgt_pid"; exit 1' SIGINT SIGTERM EXIT 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@49 -- # waitforlisten 73750 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@835 -- # '[' -z 73750 ']' 00:15:23.029 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:23.029 02:10:47 blockdev_xnvme -- bdev/blockdev.sh@46 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '' '' 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:23.029 02:10:47 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:23.029 [2024-12-15 02:10:47.734261] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
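The lt/cmp_versions trace a few lines up is scripts/common.sh deciding whether the installed lcov predates 2.x, which picks the --rc option spelling exported into LCOV_OPTS. The logic reduces to a field-wise numeric compare after splitting the versions on '.', '-' and ':'; a minimal standalone sketch follows (ver_lt is a made-up name, not the script's, and purely numeric fields are assumed).

    # Sketch of the comparison the trace walks through, not the script itself.
    ver_lt() {
        local IFS=.-: a b i
        read -ra a <<< "$1"; read -ra b <<< "$2"
        for ((i = 0; i < ${#a[@]} || i < ${#b[@]}; i++)); do
            (( ${a[i]:-0} < ${b[i]:-0} )) && return 0
            (( ${a[i]:-0} > ${b[i]:-0} )) && return 1
        done
        return 1    # equal counts as not-less-than
    }
    ver_lt 1.15 2 && echo 'legacy lcov: use the --rc lcov_* option names'
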
00:15:23.029 [2024-12-15 02:10:47.734632] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid73750 ] 00:15:23.289 [2024-12-15 02:10:47.898275] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:23.289 [2024-12-15 02:10:48.018941] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:24.290 02:10:48 blockdev_xnvme -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:24.290 02:10:48 blockdev_xnvme -- common/autotest_common.sh@868 -- # return 0 00:15:24.290 02:10:48 blockdev_xnvme -- bdev/blockdev.sh@731 -- # case "$test_type" in 00:15:24.290 02:10:48 blockdev_xnvme -- bdev/blockdev.sh@766 -- # setup_xnvme_conf 00:15:24.290 02:10:48 blockdev_xnvme -- bdev/blockdev.sh@88 -- # local io_mechanism=io_uring 00:15:24.290 02:10:48 blockdev_xnvme -- bdev/blockdev.sh@89 -- # local nvme nvmes 00:15:24.290 02:10:48 blockdev_xnvme -- bdev/blockdev.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:15:24.552 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:15:25.125 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:15:25.125 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:15:25.125 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:15:25.125 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@92 -- # get_zoned_devs 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1657 -- # zoned_devs=() 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1657 -- # local -gA zoned_devs 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # zoned_ctrls=() 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1658 -- # local -A zoned_ctrls 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1659 -- # local nvme bdf ns 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:12.0 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n1/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n2 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n2 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n2/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme0n3 00:15:25.125 02:10:49 
blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme0n3 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme0n3/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:10.0 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme1n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme1n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme1n1/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:13.0 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme2c2n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme2c2n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme2c2n1/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1668 -- # for nvme in /sys/class/nvme/nvme* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1669 -- # bdf=0000:00:11.0 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1670 -- # for ns in "$nvme/"nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1671 -- # is_block_zoned nvme3n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1650 -- # local device=nvme3n1 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1652 -- # [[ -e /sys/block/nvme3n1/queue/zoned ]] 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@1653 -- # [[ none != none ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n1 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n2 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme0n3 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme 
${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme1n1 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme2n1 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@94 -- # for nvme in /dev/nvme*n* 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -b /dev/nvme3n1 ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@95 -- # [[ -z '' ]] 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@96 -- # nvmes+=("bdev_xnvme_create $nvme ${nvme##*/} $io_mechanism -c") 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@99 -- # (( 6 > 0 )) 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # rpc_cmd 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.125 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.125 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@100 -- # printf '%s\n' 'bdev_xnvme_create /dev/nvme0n1 nvme0n1 io_uring -c' 'bdev_xnvme_create /dev/nvme0n2 nvme0n2 io_uring -c' 'bdev_xnvme_create /dev/nvme0n3 nvme0n3 io_uring -c' 'bdev_xnvme_create /dev/nvme1n1 nvme1n1 io_uring -c' 'bdev_xnvme_create /dev/nvme2n1 nvme2n1 io_uring -c' 'bdev_xnvme_create /dev/nvme3n1 nvme3n1 io_uring -c' 00:15:25.125 nvme0n1 00:15:25.125 nvme0n2 00:15:25.125 nvme0n3 00:15:25.125 nvme1n1 00:15:25.387 nvme2n1 00:15:25.387 nvme3n1 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@774 -- # rpc_cmd bdev_wait_for_examine 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # cat 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n accel 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n bdev 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@777 -- # rpc_cmd save_subsystem_config -n iobuf 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 
02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # mapfile -t bdevs 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # rpc_cmd bdev_get_bdevs 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@563 -- # xtrace_disable 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:25.387 02:10:49 blockdev_xnvme -- bdev/blockdev.sh@785 -- # jq -r '.[] | select(.claimed == false)' 00:15:25.387 02:10:49 blockdev_xnvme -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:15:25.387 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # mapfile -t bdevs_name 00:15:25.387 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # jq -r .name 00:15:25.388 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@786 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c0874012-d8a8-4bf6-87d6-9071d382b180"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0874012-d8a8-4bf6-87d6-9071d382b180",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "86cad771-a679-43e1-8b1a-679e78ec2911"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "86cad771-a679-43e1-8b1a-679e78ec2911",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d60e7b67-de8c-46e9-9574-7899f1719bf6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d60e7b67-de8c-46e9-9574-7899f1719bf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' 
"driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0ac21dff-70e6-48b7-8c94-6e5ff34d5c1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ac21dff-70e6-48b7-8c94-6e5ff34d5c1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cb9abdfb-86a6-4f21-893c-87f6f644c936"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb9abdfb-86a6-4f21-893c-87f6f644c936",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "67ca49b3-41ab-4679-87c8-8b9d96b851f0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "67ca49b3-41ab-4679-87c8-8b9d96b851f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' 00:15:25.388 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@787 -- # bdev_list=("${bdevs_name[@]}") 00:15:25.388 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@789 -- # hello_world_bdev=nvme0n1 00:15:25.388 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@790 -- # trap - SIGINT SIGTERM EXIT 00:15:25.388 02:10:50 blockdev_xnvme -- bdev/blockdev.sh@791 -- # killprocess 73750 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@954 -- # '[' -z 73750 ']' 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@958 -- # kill -0 73750 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@959 -- # uname 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # ps 
--no-headers -o comm= 73750 00:15:25.388 killing process with pid 73750 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@972 -- # echo 'killing process with pid 73750' 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@973 -- # kill 73750 00:15:25.388 02:10:50 blockdev_xnvme -- common/autotest_common.sh@978 -- # wait 73750 00:15:27.305 02:10:51 blockdev_xnvme -- bdev/blockdev.sh@795 -- # trap cleanup SIGINT SIGTERM EXIT 00:15:27.305 02:10:51 blockdev_xnvme -- bdev/blockdev.sh@797 -- # run_test bdev_hello_world /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.305 02:10:51 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 7 -le 1 ']' 00:15:27.305 02:10:51 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:27.305 02:10:51 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:27.305 ************************************ 00:15:27.305 START TEST bdev_hello_world 00:15:27.305 ************************************ 00:15:27.305 02:10:51 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/hello_bdev --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -b nvme0n1 '' 00:15:27.305 [2024-12-15 02:10:51.828725] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:15:27.305 [2024-12-15 02:10:51.828884] [ DPDK EAL parameters: hello_bdev --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74034 ] 00:15:27.305 [2024-12-15 02:10:51.998892] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:27.566 [2024-12-15 02:10:52.115964] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:27.829 [2024-12-15 02:10:52.518062] hello_bdev.c: 222:hello_start: *NOTICE*: Successfully started the application 00:15:27.829 [2024-12-15 02:10:52.518368] hello_bdev.c: 231:hello_start: *NOTICE*: Opening the bdev nvme0n1 00:15:27.829 [2024-12-15 02:10:52.518397] hello_bdev.c: 244:hello_start: *NOTICE*: Opening io channel 00:15:27.829 [2024-12-15 02:10:52.520528] hello_bdev.c: 138:hello_write: *NOTICE*: Writing to the bdev 00:15:27.829 [2024-12-15 02:10:52.521175] hello_bdev.c: 117:write_complete: *NOTICE*: bdev io write completed successfully 00:15:27.829 [2024-12-15 02:10:52.521219] hello_bdev.c: 84:hello_read: *NOTICE*: Reading io 00:15:27.829 [2024-12-15 02:10:52.522368] hello_bdev.c: 65:read_complete: *NOTICE*: Read string from bdev : Hello World! 
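The hello_bdev pass above is the stock SPDK example driven against the first xnvme bdev: it opens nvme0n1 from the generated bdev.json, writes the string, reads it back, and stops the app. The traced command reduces to the one-liner below, with the repo path abbreviated; -b selects which bdev from the JSON to exercise.

    # Same invocation as the trace above, repo path shortened.
    build/examples/hello_bdev --json test/bdev/bdev.json -b nvme0n1
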
00:15:27.829 00:15:27.829 [2024-12-15 02:10:52.522428] hello_bdev.c: 74:read_complete: *NOTICE*: Stopping app 00:15:28.774 00:15:28.774 real 0m1.561s 00:15:28.774 user 0m1.167s 00:15:28.774 sys 0m0.246s 00:15:28.774 02:10:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:28.774 02:10:53 blockdev_xnvme.bdev_hello_world -- common/autotest_common.sh@10 -- # set +x 00:15:28.774 ************************************ 00:15:28.774 END TEST bdev_hello_world 00:15:28.774 ************************************ 00:15:28.774 02:10:53 blockdev_xnvme -- bdev/blockdev.sh@798 -- # run_test bdev_bounds bdev_bounds '' 00:15:28.774 02:10:53 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']' 00:15:28.774 02:10:53 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:28.774 02:10:53 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:28.774 ************************************ 00:15:28.774 START TEST bdev_bounds 00:15:28.774 ************************************ 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1129 -- # bdev_bounds '' 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@289 -- # bdevio_pid=74071 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@290 -- # trap 'cleanup; killprocess $bdevio_pid; exit 1' SIGINT SIGTERM EXIT 00:15:28.774 Process bdevio pid: 74071 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@291 -- # echo 'Process bdevio pid: 74071' 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@288 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/bdevio -w -s 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@292 -- # waitforlisten 74071 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@835 -- # '[' -z 74071 ']' 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:28.774 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:28.774 02:10:53 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:28.774 [2024-12-15 02:10:53.453873] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:15:28.774 [2024-12-15 02:10:53.454028] [ DPDK EAL parameters: bdevio --no-shconf -c 0x7 -m 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74071 ] 00:15:29.035 [2024-12-15 02:10:53.611738] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:15:29.035 [2024-12-15 02:10:53.729788] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:15:29.035 [2024-12-15 02:10:53.730134] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:15:29.035 [2024-12-15 02:10:53.730254] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:29.605 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:29.605 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@868 -- # return 0 00:15:29.605 02:10:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@293 -- # /home/vagrant/spdk_repo/spdk/test/bdev/bdevio/tests.py perform_tests 00:15:29.866 I/O targets: 00:15:29.866 nvme0n1: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:29.866 nvme0n2: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:29.866 nvme0n3: 1048576 blocks of 4096 bytes (4096 MiB) 00:15:29.866 nvme1n1: 1548666 blocks of 4096 bytes (6050 MiB) 00:15:29.866 nvme2n1: 262144 blocks of 4096 bytes (1024 MiB) 00:15:29.866 nvme3n1: 1310720 blocks of 4096 bytes (5120 MiB) 00:15:29.866 00:15:29.866 00:15:29.866 CUnit - A unit testing framework for C - Version 2.1-3 00:15:29.866 http://cunit.sourceforge.net/ 00:15:29.866 00:15:29.866 00:15:29.866 Suite: bdevio tests on: nvme3n1 00:15:29.866 Test: blockdev write read block ...passed 00:15:29.866 Test: blockdev write zeroes read block ...passed 00:15:29.866 Test: blockdev write zeroes read no split ...passed 00:15:29.866 Test: blockdev write zeroes read split ...passed 00:15:29.866 Test: blockdev write zeroes read split partial ...passed 00:15:29.866 Test: blockdev reset ...passed 00:15:29.866 Test: blockdev write read 8 blocks ...passed 00:15:29.866 Test: blockdev write read size > 128k ...passed 00:15:29.866 Test: blockdev write read invalid size ...passed 00:15:29.866 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:29.866 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:29.866 Test: blockdev write read max offset ...passed 00:15:29.866 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:29.866 Test: blockdev writev readv 8 blocks ...passed 00:15:29.866 Test: blockdev writev readv 30 x 1block ...passed 00:15:29.866 Test: blockdev writev readv block ...passed 00:15:29.866 Test: blockdev writev readv size > 128k ...passed 00:15:29.866 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:29.866 Test: blockdev comparev and writev ...passed 00:15:29.866 Test: blockdev nvme passthru rw ...passed 00:15:29.866 Test: blockdev nvme passthru vendor specific ...passed 00:15:29.866 Test: blockdev nvme admin passthru ...passed 00:15:29.866 Test: blockdev copy ...passed 00:15:29.866 Suite: bdevio tests on: nvme2n1 00:15:29.866 Test: blockdev write read block ...passed 00:15:29.866 Test: blockdev write zeroes read block ...passed 00:15:29.866 Test: blockdev write zeroes read no split ...passed 00:15:29.866 Test: blockdev write zeroes read split ...passed 00:15:29.866 Test: blockdev write zeroes read split partial ...passed 00:15:29.866 Test: blockdev reset ...passed 
00:15:29.866 Test: blockdev write read 8 blocks ...passed 00:15:29.866 Test: blockdev write read size > 128k ...passed 00:15:29.866 Test: blockdev write read invalid size ...passed 00:15:29.866 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:29.866 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:29.866 Test: blockdev write read max offset ...passed 00:15:29.866 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:29.866 Test: blockdev writev readv 8 blocks ...passed 00:15:29.866 Test: blockdev writev readv 30 x 1block ...passed 00:15:29.866 Test: blockdev writev readv block ...passed 00:15:29.866 Test: blockdev writev readv size > 128k ...passed 00:15:29.866 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:29.866 Test: blockdev comparev and writev ...passed 00:15:29.866 Test: blockdev nvme passthru rw ...passed 00:15:29.866 Test: blockdev nvme passthru vendor specific ...passed 00:15:29.866 Test: blockdev nvme admin passthru ...passed 00:15:29.866 Test: blockdev copy ...passed 00:15:29.866 Suite: bdevio tests on: nvme1n1 00:15:29.866 Test: blockdev write read block ...passed 00:15:29.866 Test: blockdev write zeroes read block ...passed 00:15:29.866 Test: blockdev write zeroes read no split ...passed 00:15:29.866 Test: blockdev write zeroes read split ...passed 00:15:30.126 Test: blockdev write zeroes read split partial ...passed 00:15:30.126 Test: blockdev reset ...passed 00:15:30.126 Test: blockdev write read 8 blocks ...passed 00:15:30.126 Test: blockdev write read size > 128k ...passed 00:15:30.126 Test: blockdev write read invalid size ...passed 00:15:30.126 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.126 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.126 Test: blockdev write read max offset ...passed 00:15:30.126 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.126 Test: blockdev writev readv 8 blocks ...passed 00:15:30.126 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.126 Test: blockdev writev readv block ...passed 00:15:30.126 Test: blockdev writev readv size > 128k ...passed 00:15:30.126 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.127 Test: blockdev comparev and writev ...passed 00:15:30.127 Test: blockdev nvme passthru rw ...passed 00:15:30.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.127 Test: blockdev nvme admin passthru ...passed 00:15:30.127 Test: blockdev copy ...passed 00:15:30.127 Suite: bdevio tests on: nvme0n3 00:15:30.127 Test: blockdev write read block ...passed 00:15:30.127 Test: blockdev write zeroes read block ...passed 00:15:30.127 Test: blockdev write zeroes read no split ...passed 00:15:30.127 Test: blockdev write zeroes read split ...passed 00:15:30.127 Test: blockdev write zeroes read split partial ...passed 00:15:30.127 Test: blockdev reset ...passed 00:15:30.127 Test: blockdev write read 8 blocks ...passed 00:15:30.127 Test: blockdev write read size > 128k ...passed 00:15:30.127 Test: blockdev write read invalid size ...passed 00:15:30.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.127 Test: blockdev write read max offset ...passed 00:15:30.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.127 Test: blockdev writev readv 8 blocks 
...passed 00:15:30.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.127 Test: blockdev writev readv block ...passed 00:15:30.127 Test: blockdev writev readv size > 128k ...passed 00:15:30.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.127 Test: blockdev comparev and writev ...passed 00:15:30.127 Test: blockdev nvme passthru rw ...passed 00:15:30.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.127 Test: blockdev nvme admin passthru ...passed 00:15:30.127 Test: blockdev copy ...passed 00:15:30.127 Suite: bdevio tests on: nvme0n2 00:15:30.127 Test: blockdev write read block ...passed 00:15:30.127 Test: blockdev write zeroes read block ...passed 00:15:30.127 Test: blockdev write zeroes read no split ...passed 00:15:30.127 Test: blockdev write zeroes read split ...passed 00:15:30.127 Test: blockdev write zeroes read split partial ...passed 00:15:30.127 Test: blockdev reset ...passed 00:15:30.127 Test: blockdev write read 8 blocks ...passed 00:15:30.127 Test: blockdev write read size > 128k ...passed 00:15:30.127 Test: blockdev write read invalid size ...passed 00:15:30.127 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.127 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.127 Test: blockdev write read max offset ...passed 00:15:30.127 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.127 Test: blockdev writev readv 8 blocks ...passed 00:15:30.127 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.127 Test: blockdev writev readv block ...passed 00:15:30.127 Test: blockdev writev readv size > 128k ...passed 00:15:30.127 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.127 Test: blockdev comparev and writev ...passed 00:15:30.127 Test: blockdev nvme passthru rw ...passed 00:15:30.127 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.127 Test: blockdev nvme admin passthru ...passed 00:15:30.127 Test: blockdev copy ...passed 00:15:30.127 Suite: bdevio tests on: nvme0n1 00:15:30.127 Test: blockdev write read block ...passed 00:15:30.127 Test: blockdev write zeroes read block ...passed 00:15:30.127 Test: blockdev write zeroes read no split ...passed 00:15:30.127 Test: blockdev write zeroes read split ...passed 00:15:30.389 Test: blockdev write zeroes read split partial ...passed 00:15:30.389 Test: blockdev reset ...passed 00:15:30.389 Test: blockdev write read 8 blocks ...passed 00:15:30.389 Test: blockdev write read size > 128k ...passed 00:15:30.389 Test: blockdev write read invalid size ...passed 00:15:30.389 Test: blockdev write read offset + nbytes == size of blockdev ...passed 00:15:30.389 Test: blockdev write read offset + nbytes > size of blockdev ...passed 00:15:30.389 Test: blockdev write read max offset ...passed 00:15:30.389 Test: blockdev write read 2 blocks on overlapped address offset ...passed 00:15:30.389 Test: blockdev writev readv 8 blocks ...passed 00:15:30.389 Test: blockdev writev readv 30 x 1block ...passed 00:15:30.389 Test: blockdev writev readv block ...passed 00:15:30.389 Test: blockdev writev readv size > 128k ...passed 00:15:30.389 Test: blockdev writev readv size > 128k in two iovs ...passed 00:15:30.389 Test: blockdev comparev and writev ...passed 00:15:30.389 Test: blockdev nvme passthru rw ...passed 00:15:30.389 Test: blockdev nvme passthru vendor specific ...passed 00:15:30.389 Test: blockdev nvme admin passthru ...passed 00:15:30.389 Test: blockdev copy ...passed 
00:15:30.389 00:15:30.389 Run Summary: Type Total Ran Passed Failed Inactive 00:15:30.389 suites 6 6 n/a 0 0 00:15:30.389 tests 138 138 138 0 0 00:15:30.389 asserts 780 780 780 0 n/a 00:15:30.389 00:15:30.389 Elapsed time = 1.296 seconds 00:15:30.389 0 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@294 -- # killprocess 74071 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@954 -- # '[' -z 74071 ']' 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@958 -- # kill -0 74071 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # uname 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74071 00:15:30.389 killing process with pid 74071 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74071' 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@973 -- # kill 74071 00:15:30.389 02:10:54 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@978 -- # wait 74071 00:15:31.332 ************************************ 00:15:31.332 END TEST bdev_bounds 00:15:31.332 ************************************ 00:15:31.332 02:10:55 blockdev_xnvme.bdev_bounds -- bdev/blockdev.sh@295 -- # trap - SIGINT SIGTERM EXIT 00:15:31.332 00:15:31.332 real 0m2.411s 00:15:31.332 user 0m5.875s 00:15:31.332 sys 0m0.372s 00:15:31.332 02:10:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:31.332 02:10:55 blockdev_xnvme.bdev_bounds -- common/autotest_common.sh@10 -- # set +x 00:15:31.332 02:10:55 blockdev_xnvme -- bdev/blockdev.sh@799 -- # run_test bdev_nbd nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.332 02:10:55 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:15:31.332 02:10:55 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable 00:15:31.332 02:10:55 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x 00:15:31.332 ************************************ 00:15:31.332 START TEST bdev_nbd 00:15:31.332 ************************************ 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1129 -- # nbd_function_test /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '' 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # uname -s 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@299 -- # [[ Linux == Linux ]] 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@301 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@302 -- # local conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # bdev_all=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@303 -- # local bdev_all 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@304 -- # local bdev_num=6 
00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@308 -- # [[ -e /sys/module/nbd ]] 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # nbd_all=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13' '/dev/nbd14' '/dev/nbd15' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5' '/dev/nbd6' '/dev/nbd7' '/dev/nbd8' '/dev/nbd9') 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@310 -- # local nbd_all 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@311 -- # bdev_num=6 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@313 -- # local nbd_list 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@314 -- # local bdev_list 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@317 -- # nbd_pid=74130 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@318 -- # trap 'cleanup; killprocess $nbd_pid' SIGINT SIGTERM EXIT 00:15:31.332 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock... 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@319 -- # waitforlisten 74130 /var/tmp/spdk-nbd.sock 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@835 -- # '[' -z 74130 ']' 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk-nbd.sock 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@840 -- # local max_retries=100 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@316 -- # /home/vagrant/spdk_repo/spdk/test/app/bdev_svc/bdev_svc -r /var/tmp/spdk-nbd.sock -i 0 --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json '' 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk-nbd.sock...' 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@844 -- # xtrace_disable 00:15:31.332 02:10:55 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:31.332 [2024-12-15 02:10:55.938046] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
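The bdev_nbd suite starting here exports each xnvme bdev as a kernel /dev/nbdN node and then proves the node is alive with a single 4 KiB O_DIRECT read, which is what the repeated '1+0 records in/out' dd lines below are. One device's round trip reduces to roughly the sketch below; the explicit nbd device argument and the temp path are illustrative, since the traced run lets the RPC auto-assign the node and writes into the repo's test/bdev directory.

    # Sketch of one device's export-and-probe cycle from the suite below.
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0
    dd if=/dev/nbd0 of=/tmp/nbdtest bs=4096 count=1 iflag=direct
    scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0
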
00:15:31.332 [2024-12-15 02:10:55.938215] [ DPDK EAL parameters: bdev_svc -c 0x1 --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk0 --proc-type=auto ] 00:15:31.594 [2024-12-15 02:10:56.101346] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:15:31.594 [2024-12-15 02:10:56.229416] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@868 -- # return 0 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@321 -- # nbd_rpc_start_stop_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@113 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@114 -- # local bdev_list 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@116 -- # nbd_start_disks_without_nbd_idx /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@22 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@23 -- # local bdev_list 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@24 -- # local i 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@25 -- # local nbd_device 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i = 0 )) 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.165 02:10:56 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 00:15:32.426 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd0 00:15:32.426 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd0 00:15:32.426 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd0 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.427 
1+0 records in 00:15:32.427 1+0 records out 00:15:32.427 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00120549 s, 3.4 MB/s 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.427 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd1 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd1 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd1 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.688 1+0 records in 00:15:32.688 1+0 records out 00:15:32.688 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00118985 s, 3.4 MB/s 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.688 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 00:15:32.949 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd2 00:15:32.949 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd2 00:15:32.949 02:10:57 
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd2 00:15:32.949 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd2 00:15:32.949 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd2 /proc/partitions 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd2 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:32.950 1+0 records in 00:15:32.950 1+0 records out 00:15:32.950 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00106653 s, 3.8 MB/s 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:32.950 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 00:15:33.210 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd3 00:15:33.210 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd3 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd3 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd3 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd3 /proc/partitions 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd3 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.211 1+0 records in 00:15:33.211 1+0 records out 00:15:33.211 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111299 s, 3.7 MB/s 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # 
stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.211 02:10:57 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd4 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd4 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd4 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd4 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd4 /proc/partitions 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd4 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.472 1+0 records in 00:15:33.472 1+0 records out 00:15:33.472 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00119907 s, 3.4 MB/s 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.472 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@28 -- # nbd_device=/dev/nbd5 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # basename /dev/nbd5 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@30 -- # waitfornbd nbd5 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd5 00:15:33.734 02:10:58 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd5 /proc/partitions 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd5 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:33.734 1+0 records in 00:15:33.734 1+0 records out 00:15:33.734 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.0011607 s, 3.5 MB/s 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i++ )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@27 -- # (( i < 6 )) 00:15:33.734 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@118 -- # nbd_disks_json='[ 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd0", 00:15:33.996 "bdev_name": "nvme0n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd1", 00:15:33.996 "bdev_name": "nvme0n2" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd2", 00:15:33.996 "bdev_name": "nvme0n3" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd3", 00:15:33.996 "bdev_name": "nvme1n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd4", 00:15:33.996 "bdev_name": "nvme2n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd5", 00:15:33.996 "bdev_name": "nvme3n1" 00:15:33.996 } 00:15:33.996 ]' 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # nbd_disks_name=($(echo "${nbd_disks_json}" | jq -r '.[] | .nbd_device')) 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # echo '[ 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd0", 00:15:33.996 "bdev_name": "nvme0n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd1", 00:15:33.996 "bdev_name": "nvme0n2" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd2", 00:15:33.996 "bdev_name": "nvme0n3" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd3", 00:15:33.996 "bdev_name": "nvme1n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd4", 00:15:33.996 "bdev_name": "nvme2n1" 00:15:33.996 }, 00:15:33.996 { 00:15:33.996 "nbd_device": "/dev/nbd5", 00:15:33.996 "bdev_name": "nvme3n1" 00:15:33.996 } 00:15:33.996 ]' 00:15:33.996 02:10:58 
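With all six bdevs exported, nbd_common.sh asks the SPDK application over the /var/tmp/spdk-nbd.sock RPC socket which NBD devices it is serving (the JSON echoed above; the jq parse follows immediately below) and pulls the device nodes out of the response. Condensed into a runnable form, including the grep -c count that the teardown assertions later check against zero (the trailing true mirrors the trace's fallback, so an empty list yields 0 rather than a failing grep):

    # device nodes currently exported by the SPDK app
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device'
    # how many of them there are (0 once every disk is stopped)
    /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks \
        | jq -r '.[] | .nbd_device' | grep -c /dev/nbd || true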
blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@119 -- # jq -r '.[] | .nbd_device' 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@120 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd2 /dev/nbd3 /dev/nbd4 /dev/nbd5' 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd2' '/dev/nbd3' '/dev/nbd4' '/dev/nbd5') 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:33.996 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:34.257 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.258 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.258 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.258 02:10:58 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd2 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd2 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd2 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd2 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep 
-q -w nbd2 /proc/partitions 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.519 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd3 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd3 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd3 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd3 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd3 /proc/partitions 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:34.781 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd4 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd4 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd4 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd4 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd4 /proc/partitions 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:35.041 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd5 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd5 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd5 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd5 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd5 /proc/partitions 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.302 02:10:59 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@122 -- # count=0 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@123 -- # '[' 0 -ne 0 ']' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@127 -- # return 0 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@322 -- # nbd_rpc_data_verify /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@90 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@91 -- # local bdev_list 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@92 -- # local nbd_list 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@94 -- # nbd_start_disks /var/tmp/spdk-nbd.sock 'nvme0n1 nvme0n2 nvme0n3 nvme1n1 nvme2n1 nvme3n1' '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@9 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # bdev_list=('nvme0n1' 'nvme0n2' 'nvme0n3' 'nvme1n1' 'nvme2n1' 'nvme3n1') 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@10 -- # local bdev_list 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@11 -- # local nbd_list 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@12 -- # local i 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i = 0 )) 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:35.564 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n1 /dev/nbd0 00:15:35.826 /dev/nbd0 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd0 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd0 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- 
common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:35.826 1+0 records in 00:15:35.826 1+0 records out 00:15:35.826 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00094785 s, 4.3 MB/s 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:35.826 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n2 /dev/nbd1 00:15:36.088 /dev/nbd1 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd1 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd1 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd1 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd1 /proc/partitions 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd1 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.088 1+0 records in 00:15:36.088 1+0 records out 00:15:36.088 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00080121 s, 5.1 MB/s 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.088 02:11:00 
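This second export pass differs from the first: nbd_rpc_data_verify pins each bdev to an explicit device node (nvme0n1 on /dev/nbd0, nvme0n2 on /dev/nbd1, then /dev/nbd10 through /dev/nbd13), whereas the earlier nbd_start_disk calls let the SPDK app pick the node. The nbd_start_disks helper driving this trace, sketched from the @9-@17 lines (the loop bound appears as a literal 'i < 6' in the xtrace because it is evaluated with six devices passed):

    nbd_start_disks() {
        local rpc_server=$1
        local bdev_list=($2) nbd_list=($3)
        local i
        for ((i = 0; i < ${#nbd_list[@]}; i++)); do
            # export bdev i on the requested /dev/nbdX node
            /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s "$rpc_server" \
                nbd_start_disk "${bdev_list[$i]}" "${nbd_list[$i]}"
            waitfornbd "$(basename "${nbd_list[$i]}")"   # readiness check sketched earlier
        done
    }

The export loop continues below for the remaining four devices.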
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.088 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme0n3 /dev/nbd10 00:15:36.350 /dev/nbd10 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd10 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd10 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd10 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd10 /proc/partitions 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd10 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.350 1+0 records in 00:15:36.350 1+0 records out 00:15:36.350 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00109635 s, 3.7 MB/s 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.350 02:11:00 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme1n1 /dev/nbd11 00:15:36.611 /dev/nbd11 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd11 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd11 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd11 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.611 02:11:01 
blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd11 /proc/partitions 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd11 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.611 1+0 records in 00:15:36.611 1+0 records out 00:15:36.611 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000899122 s, 4.6 MB/s 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.611 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme2n1 /dev/nbd12 00:15:36.874 /dev/nbd12 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd12 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd12 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd12 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd12 /proc/partitions 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd12 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:36.874 1+0 records in 00:15:36.874 1+0 records out 00:15:36.874 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00111396 s, 3.7 MB/s 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- 
bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:36.874 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@15 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk nvme3n1 /dev/nbd13 00:15:37.136 /dev/nbd13 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # basename /dev/nbd13 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@17 -- # waitfornbd nbd13 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@872 -- # local nbd_name=nbd13 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@873 -- # local i 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i = 1 )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@876 -- # grep -q -w nbd13 /proc/partitions 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@877 -- # break 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@889 -- # dd if=/dev/nbd13 of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdtest bs=4096 count=1 iflag=direct 00:15:37.136 1+0 records in 00:15:37.136 1+0 records out 00:15:37.136 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.00127671 s, 3.2 MB/s 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@890 -- # size=4096 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/nbdtest 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@893 -- # return 0 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i++ )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@14 -- # (( i < 6 )) 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:37.136 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:37.397 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[ 00:15:37.397 { 00:15:37.397 "nbd_device": "/dev/nbd0", 00:15:37.397 "bdev_name": "nvme0n1" 00:15:37.397 }, 00:15:37.397 { 00:15:37.397 "nbd_device": "/dev/nbd1", 00:15:37.397 "bdev_name": "nvme0n2" 00:15:37.397 }, 00:15:37.397 { 00:15:37.397 "nbd_device": "/dev/nbd10", 00:15:37.397 "bdev_name": "nvme0n3" 00:15:37.397 }, 00:15:37.397 { 00:15:37.397 "nbd_device": "/dev/nbd11", 00:15:37.397 "bdev_name": "nvme1n1" 00:15:37.397 }, 00:15:37.397 { 00:15:37.397 "nbd_device": "/dev/nbd12", 00:15:37.397 "bdev_name": "nvme2n1" 00:15:37.397 }, 00:15:37.397 { 00:15:37.398 "nbd_device": "/dev/nbd13", 00:15:37.398 "bdev_name": "nvme3n1" 00:15:37.398 } 00:15:37.398 ]' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd 
-- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[ 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd0", 00:15:37.398 "bdev_name": "nvme0n1" 00:15:37.398 }, 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd1", 00:15:37.398 "bdev_name": "nvme0n2" 00:15:37.398 }, 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd10", 00:15:37.398 "bdev_name": "nvme0n3" 00:15:37.398 }, 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd11", 00:15:37.398 "bdev_name": "nvme1n1" 00:15:37.398 }, 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd12", 00:15:37.398 "bdev_name": "nvme2n1" 00:15:37.398 }, 00:15:37.398 { 00:15:37.398 "nbd_device": "/dev/nbd13", 00:15:37.398 "bdev_name": "nvme3n1" 00:15:37.398 } 00:15:37.398 ]' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name='/dev/nbd0 00:15:37.398 /dev/nbd1 00:15:37.398 /dev/nbd10 00:15:37.398 /dev/nbd11 00:15:37.398 /dev/nbd12 00:15:37.398 /dev/nbd13' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '/dev/nbd0 00:15:37.398 /dev/nbd1 00:15:37.398 /dev/nbd10 00:15:37.398 /dev/nbd11 00:15:37.398 /dev/nbd12 00:15:37.398 /dev/nbd13' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=6 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 6 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@95 -- # count=6 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@96 -- # '[' 6 -ne 6 ']' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@100 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' write 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=write 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' write = write ']' 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@76 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256 00:15:37.398 256+0 records in 00:15:37.398 256+0 records out 00:15:37.398 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.00540104 s, 194 MB/s 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.398 02:11:01 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd0 bs=4096 count=256 oflag=direct 00:15:37.658 256+0 records in 00:15:37.658 256+0 records out 00:15:37.658 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240898 s, 4.4 MB/s 00:15:37.658 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.658 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd1 bs=4096 count=256 oflag=direct 00:15:37.919 256+0 records in 00:15:37.919 256+0 records out 00:15:37.919 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.240252 s, 
4.4 MB/s 00:15:37.919 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:37.919 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd10 bs=4096 count=256 oflag=direct 00:15:38.181 256+0 records in 00:15:38.181 256+0 records out 00:15:38.181 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.233034 s, 4.5 MB/s 00:15:38.181 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.181 02:11:02 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd11 bs=4096 count=256 oflag=direct 00:15:38.442 256+0 records in 00:15:38.442 256+0 records out 00:15:38.442 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.311128 s, 3.4 MB/s 00:15:38.442 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.442 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd12 bs=4096 count=256 oflag=direct 00:15:38.704 256+0 records in 00:15:38.704 256+0 records out 00:15:38.704 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.199732 s, 5.2 MB/s 00:15:38.704 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@77 -- # for i in "${nbd_list[@]}" 00:15:38.704 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@78 -- # dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of=/dev/nbd13 bs=4096 count=256 oflag=direct 00:15:38.964 256+0 records in 00:15:38.964 256+0 records out 00:15:38.964 1048576 bytes (1.0 MB, 1.0 MiB) copied, 0.254321 s, 4.1 MB/s 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@101 -- # nbd_dd_data_verify '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' verify 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@70 -- # local nbd_list 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@71 -- # local operation=verify 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@72 -- # local tmp_file=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@74 -- # '[' verify = write ']' 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@80 -- # '[' verify = verify ']' 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd0 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd1 00:15:38.964 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd10 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd11 00:15:38.965 
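The write/verify pass running here is nbd_dd_data_verify: it seeds 1 MiB of random data (256 blocks of 4 KiB) into a scratch file, copies that file onto every exported device with O_DIRECT writes, then compares the first 1 MiB of each device byte-for-byte against the same file; the cmp calls for the last two devices follow below. Condensed from the trace:

    # write pass: push the same 1 MiB of random data to every NBD device
    dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest bs=4096 count=256
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        dd if=/home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest of="$nbd" bs=4096 count=256 oflag=direct
    done
    # verify pass: every device must read back identical bytes
    for nbd in /dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13; do
        cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest "$nbd"
    done
    rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest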
02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd12 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@82 -- # for i in "${nbd_list[@]}" 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@83 -- # cmp -b -n 1M /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest /dev/nbd13 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@85 -- # rm /home/vagrant/spdk_repo/spdk/test/bdev/nbdrandtest 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@103 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock '/dev/nbd0 /dev/nbd1 /dev/nbd10 /dev/nbd11 /dev/nbd12 /dev/nbd13' 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0' '/dev/nbd1' '/dev/nbd10' '/dev/nbd11' '/dev/nbd12' '/dev/nbd13') 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:38.965 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd1 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd1 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd1 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd1 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd1 /proc/partitions 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.226 02:11:03 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk 
/dev/nbd10 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd10 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd10 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd10 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd10 /proc/partitions 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.488 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd11 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd11 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd11 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd11 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd11 /proc/partitions 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:39.749 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd12 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd12 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd12 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd12 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd12 /proc/partitions 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:40.010 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd13 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd13 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd13 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd13 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:40.270 
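Teardown mirrors setup: nbd_stop_disk is issued per device and waitfornbd_exit polls /proc/partitions until the entry disappears, breaking out as soon as grep stops matching (the nbd13 loop continues just below). A sketch from the @35-@45 trace lines, with the retry delay again assumed:

    waitfornbd_exit() {
        local nbd_name=$1 i
        for ((i = 1; i <= 20; i++)); do
            # done once the kernel has dropped the partition entry
            grep -q -w "$nbd_name" /proc/partitions || break
            sleep 0.1   # assumed; not visible in the xtrace
        done
        return 0
    }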
02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd13 /proc/partitions 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # nbd_get_count /var/tmp/spdk-nbd.sock 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@61 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.270 02:11:04 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_get_disks 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@63 -- # nbd_disks_json='[]' 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # jq -r '.[] | .nbd_device' 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # echo '[]' 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@64 -- # nbd_disks_name= 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # echo '' 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # grep -c /dev/nbd 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # true 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@65 -- # count=0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@66 -- # echo 0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@104 -- # count=0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@105 -- # '[' 0 -ne 0 ']' 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@109 -- # return 0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@323 -- # nbd_with_lvol_verify /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@131 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@132 -- # local nbd=/dev/nbd0 00:15:40.534 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@134 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_malloc_create -b malloc_lvol_verify 16 512 00:15:40.793 malloc_lvol_verify 00:15:40.793 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@135 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create_lvstore malloc_lvol_verify lvs 00:15:41.051 636c8b54-6bb7-4fe9-8faa-73db7fe9f1ca 00:15:41.051 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@136 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock bdev_lvol_create lvol 4 -l lvs 00:15:41.051 d83ff3d3-e890-4bcd-950e-2dde5eec15be 00:15:41.051 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@137 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_start_disk lvs/lvol /dev/nbd0 00:15:41.309 /dev/nbd0 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@139 -- # wait_for_nbd_set_capacity /dev/nbd0 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@146 -- # local nbd=nbd0 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@148 -- # [[ -e /sys/block/nbd0/size ]] 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@150 -- # (( 8192 == 0 )) 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@141 -- # mkfs.ext4 /dev/nbd0 00:15:41.309 mke2fs 1.47.0 (5-Feb-2023) 00:15:41.309 Discarding device blocks: 0/4096 
done 00:15:41.309 Creating filesystem with 4096 1k blocks and 1024 inodes 00:15:41.309 00:15:41.309 Allocating group tables: 0/1 done 00:15:41.309 Writing inode tables: 0/1 done 00:15:41.309 Creating journal (1024 blocks): done 00:15:41.309 Writing superblocks and filesystem accounting information: 0/1 done 00:15:41.309 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@142 -- # nbd_stop_disks /var/tmp/spdk-nbd.sock /dev/nbd0 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@49 -- # local rpc_server=/var/tmp/spdk-nbd.sock 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # nbd_list=('/dev/nbd0') 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@50 -- # local nbd_list 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@51 -- # local i 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@53 -- # for i in "${nbd_list[@]}" 00:15:41.309 02:11:05 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@54 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock nbd_stop_disk /dev/nbd0 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # basename /dev/nbd0 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@55 -- # waitfornbd_exit nbd0 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@35 -- # local nbd_name=nbd0 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i = 1 )) 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@37 -- # (( i <= 20 )) 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@38 -- # grep -q -w nbd0 /proc/partitions 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@41 -- # break 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/nbd_common.sh@45 -- # return 0 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@325 -- # killprocess 74130 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@954 -- # '[' -z 74130 ']' 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@958 -- # kill -0 74130 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # uname 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 74130 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:15:41.568 killing process with pid 74130 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@972 -- # echo 'killing process with pid 74130' 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@973 -- # kill 74130 00:15:41.568 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@978 -- # wait 74130 00:15:42.137 02:11:06 blockdev_xnvme.bdev_nbd -- bdev/blockdev.sh@326 -- # trap - SIGINT SIGTERM EXIT 00:15:42.137 00:15:42.137 real 0m10.949s 00:15:42.137 user 0m14.630s 00:15:42.137 sys 0m3.888s 00:15:42.137 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@1130 -- # xtrace_disable 00:15:42.137 02:11:06 blockdev_xnvme.bdev_nbd -- common/autotest_common.sh@10 -- # set +x 00:15:42.137 ************************************ 00:15:42.137 END TEST bdev_nbd 00:15:42.137 ************************************ 
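The lvol pass that closes bdev_nbd above is worth spelling out: before killing the target process, the suite stacks a 16 MB malloc bdev (512-byte blocks), a logical-volume store, and a 4 MB lvol on top of it, exports the lvol as /dev/nbd0, confirms /sys/block/nbd0/size is non-zero, and proves a real filesystem fits by running mkfs.ext4 (hence the mke2fs output creating 4096 1k blocks). The RPC sequence, condensed from the trace into a form runnable against the same socket:

    RPC="/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk-nbd.sock"
    $RPC bdev_malloc_create -b malloc_lvol_verify 16 512   # 16 MB backing bdev
    $RPC bdev_lvol_create_lvstore malloc_lvol_verify lvs   # prints the lvstore UUID
    $RPC bdev_lvol_create lvol 4 -l lvs                    # 4 MB volume, prints its UUID
    $RPC nbd_start_disk lvs/lvol /dev/nbd0
    mkfs.ext4 /dev/nbd0
    $RPC nbd_stop_disk /dev/nbd0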
00:15:42.137 02:11:06 blockdev_xnvme -- bdev/blockdev.sh@800 -- # [[ y == y ]]
00:15:42.137 02:11:06 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = nvme ']'
00:15:42.137 02:11:06 blockdev_xnvme -- bdev/blockdev.sh@801 -- # '[' xnvme = gpt ']'
00:15:42.137 02:11:06 blockdev_xnvme -- bdev/blockdev.sh@805 -- # run_test bdev_fio fio_test_suite ''
00:15:42.137 02:11:06 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 3 -le 1 ']'
00:15:42.137 02:11:06 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:42.137 02:11:06 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:42.137 ************************************
00:15:42.137 START TEST bdev_fio
00:15:42.137 ************************************
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1129 -- # fio_test_suite ''
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@330 -- # local env_context
00:15:42.137 /home/vagrant/spdk_repo/spdk/test/bdev /home/vagrant/spdk_repo/spdk
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@334 -- # pushd /home/vagrant/spdk_repo/spdk/test/bdev
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@335 -- # trap 'rm -f ./*.state; popd; exit 1' SIGINT SIGTERM EXIT
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # echo ''
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # sed s/--env-context=//
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@338 -- # env_context=
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@339 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio verify AIO ''
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=verify
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=AIO
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z verify ']'
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' verify == verify ']'
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1318 -- # cat
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1327 -- # '[' AIO == AIO ']'
00:15:42.137 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # /usr/src/fio/fio --version
00:15:42.398 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1328 -- # [[ fio-3.35 == *\f\i\o\-\3* ]]
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1329 -- # echo serialize_overlap=1
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n1]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n1
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n2]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n2
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme0n3]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme0n3
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme1n1]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme1n1
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme2n1]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme2n1
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@340 -- # for b in "${bdevs_name[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@341 -- # echo '[job_nvme3n1]'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@342 -- # echo filename=nvme3n1
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@346 -- # local 'fio_params=--ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@348 -- # run_test bdev_fio_rw_verify fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1105 -- # '[' 11 -le 1 ']'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:15:42.399 ************************************
00:15:42.399 START TEST bdev_fio_rw_verify
00:15:42.399 ************************************
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1129 -- # fio_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan')
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1343 -- # local sanitizers
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1345 -- # shift
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1347 -- # local asan_lib=
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}"
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # grep libasan
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # awk '{print $3}'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]]
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1351 -- # break
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev'
00:15:42.399 02:11:06 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio --verify_state_save=0 --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json --spdk_mem=0 --aux-path=/home/vagrant/spdk_repo/spdk/../output
00:15:42.399 job_nvme0n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 job_nvme0n2: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 job_nvme0n3: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 job_nvme1n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 job_nvme2n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 job_nvme3n1: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=8
00:15:42.399 fio-3.35
00:15:42.399 Starting 6 threads
00:15:54.683
00:15:54.683 job_nvme0n1: (groupid=0, jobs=6): err= 0: pid=74543: Sun Dec 15 02:11:17 2024
00:15:54.683 read: IOPS=17.2k, BW=67.2MiB/s (70.5MB/s)(672MiB/10002msec)
00:15:54.683 slat (usec): min=2, max=2112, avg= 6.30, stdev=16.20
00:15:54.683 clat (usec): min=79, max=9361, avg=1078.54, stdev=736.04
00:15:54.683 lat (usec): min=83, max=9376, avg=1084.84, stdev=736.86
00:15:54.683 clat percentiles (usec):
00:15:54.683 | 50.000th=[ 930], 99.000th=[ 3392], 99.900th=[ 4817], 99.990th=[ 6783],
00:15:54.683 | 99.999th=[ 9372]
00:15:54.683 write: IOPS=17.4k, BW=68.2MiB/s (71.5MB/s)(682MiB/10002msec); 0 zone resets
00:15:54.683 slat (usec): min=12, max=4067, avg=40.75, stdev=137.15
00:15:54.683 clat (usec): min=65, max=8390, avg=1371.02, stdev=834.47
00:15:54.683 lat (usec): min=89, max=8407, avg=1411.77, stdev=849.31
00:15:54.683 clat percentiles (usec):
00:15:54.683 | 50.000th=[ 1237], 99.000th=[ 3982], 99.900th=[ 5276], 99.990th=[ 6521],
00:15:54.683 | 99.999th=[ 8356]
00:15:54.683 bw ( KiB/s): min=49026, max=123512, per=100.00%, avg=70503.21, stdev=3581.87, samples=114
00:15:54.683 iops : min=12252, max=30878, avg=17624.79, stdev=895.58, samples=114
00:15:54.683 lat (usec) : 100=0.03%, 250=5.43%, 500=13.12%, 750=14.43%, 1000=12.93%
00:15:54.683 lat (msec) : 2=38.82%, 4=14.59%, 10=0.65%
00:15:54.683 cpu : usr=39.58%, sys=34.66%, ctx=6322, majf=0, minf=16524
00:15:54.683 IO depths : 1=11.2%, 2=23.6%, 4=51.3%, 8=13.9%, 16=0.0%, 32=0.0%, >=64=0.0%
00:15:54.683 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:54.683 complete : 0=0.0%, 4=89.2%, 8=10.8%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0%
00:15:54.683 issued rwts: total=172032,174506,0,0 short=0,0,0,0 dropped=0,0,0,0
00:15:54.683 latency : target=0, window=0, percentile=100.00%, depth=8
00:15:54.683
00:15:54.683 Run status group 0 (all jobs):
00:15:54.683 READ: bw=67.2MiB/s (70.5MB/s), 67.2MiB/s-67.2MiB/s (70.5MB/s-70.5MB/s), io=672MiB (705MB), run=10002-10002msec
00:15:54.683 WRITE: bw=68.2MiB/s (71.5MB/s), 68.2MiB/s-68.2MiB/s (71.5MB/s-71.5MB/s), io=682MiB (715MB), run=10002-10002msec
00:15:54.683 -----------------------------------------------------
00:15:54.683 Suppressions used:
00:15:54.683 count bytes template
00:15:54.683 6 48 /usr/src/fio/parse.c
00:15:54.683 2344 225024 /usr/src/fio/iolog.c
00:15:54.683 1 8 libtcmalloc_minimal.so
00:15:54.683 1 904 libcrypto.so
00:15:54.683 -----------------------------------------------------
00:15:54.683
00:15:54.683
00:15:54.683 real 0m12.135s
00:15:54.683 user 0m25.373s
00:15:54.683 sys 0m21.159s
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.683 ************************************
00:15:54.683 END TEST bdev_fio_rw_verify
00:15:54.683 ************************************
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio.bdev_fio_rw_verify -- common/autotest_common.sh@10 -- # set +x
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@349 -- # rm -f
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@350 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@353 -- # fio_config_gen /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio trim '' ''
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1284 -- # local config_file=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1285 -- # local workload=trim
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1286 -- # local bdev_type=
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1287 -- # local env_context=
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1288 -- # local fio_dir=/usr/src/fio
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1290 -- # '[' -e /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio ']'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1295 -- # '[' -z trim ']'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1299 -- # '[' -n '' ']'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1303 -- # touch /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1305 -- # cat
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1317 -- # '[' trim == verify ']'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1332 -- # '[' trim == trim ']'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1333 -- # echo rw=trimwrite
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # jq -r 'select(.supported_io_types.unmap == true) | .name'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # printf '%s\n' '{' ' "name": "nvme0n1",' ' "aliases": [' ' "c0874012-d8a8-4bf6-87d6-9071d382b180"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "c0874012-d8a8-4bf6-87d6-9071d382b180",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n2",' ' "aliases": [' ' "86cad771-a679-43e1-8b1a-679e78ec2911"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "86cad771-a679-43e1-8b1a-679e78ec2911",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme0n3",' ' "aliases": [' ' "d60e7b67-de8c-46e9-9574-7899f1719bf6"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1048576,' ' "uuid": "d60e7b67-de8c-46e9-9574-7899f1719bf6",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme1n1",' ' "aliases": [' ' "0ac21dff-70e6-48b7-8c94-6e5ff34d5c1f"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1548666,' ' "uuid": "0ac21dff-70e6-48b7-8c94-6e5ff34d5c1f",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme2n1",' ' "aliases": [' ' "cb9abdfb-86a6-4f21-893c-87f6f644c936"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 262144,' ' "uuid": "cb9abdfb-86a6-4f21-893c-87f6f644c936",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}' '{' ' "name": "nvme3n1",' ' "aliases": [' ' "67ca49b3-41ab-4679-87c8-8b9d96b851f0"' ' ],' ' "product_name": "xNVMe bdev",' ' "block_size": 4096,' ' "num_blocks": 1310720,' ' "uuid": "67ca49b3-41ab-4679-87c8-8b9d96b851f0",' ' "assigned_rate_limits": {' ' "rw_ios_per_sec": 0,' ' "rw_mbytes_per_sec": 0,' ' "r_mbytes_per_sec": 0,' ' "w_mbytes_per_sec": 0' ' },' ' "claimed": false,' ' "zoned": false,' ' "supported_io_types": {' ' "read": true,' ' "write": true,' ' "unmap": false,' ' "flush": false,' ' "reset": false,' ' "nvme_admin": false,' ' "nvme_io": false,' ' "nvme_io_md": false,' ' "write_zeroes": true,' ' "zcopy": false,' ' "get_zone_info": false,' ' "zone_management": false,' ' "zone_append": false,' ' "compare": false,' ' "compare_and_write": false,' ' "abort": false,' ' "seek_hole": false,' ' "seek_data": false,' ' "copy": false,' ' "nvme_iov_md": false' ' },' ' "driver_specific": {}' '}'
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@354 -- # [[ -n '' ]]
00:15:54.683 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@360 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio
00:15:54.684 /home/vagrant/spdk_repo/spdk
00:15:54.684 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@361 -- # popd
00:15:54.684 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@362 -- # trap - SIGINT SIGTERM EXIT
00:15:54.684 02:11:19 blockdev_xnvme.bdev_fio -- bdev/blockdev.sh@363 -- # return 0
00:15:54.684
00:15:54.684 real 0m12.303s
00:15:54.684 user 0m25.439s
00:15:54.684 sys 0m21.240s
00:15:54.684 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@1130 -- # xtrace_disable
00:15:54.684 02:11:19 blockdev_xnvme.bdev_fio -- common/autotest_common.sh@10 -- # set +x
00:15:54.684 ************************************
00:15:54.684 END TEST bdev_fio
00:15:54.684 ************************************
00:15:54.684 02:11:19 blockdev_xnvme -- bdev/blockdev.sh@812 -- # trap cleanup SIGINT SIGTERM EXIT
00:15:54.684 02:11:19 blockdev_xnvme -- bdev/blockdev.sh@814 -- # run_test bdev_verify /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:15:54.684 02:11:19 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:15:54.684 02:11:19 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:15:54.684 02:11:19 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:15:54.684 ************************************
00:15:54.684 START TEST bdev_verify
00:15:54.684 ************************************
00:15:54.684 02:11:19 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w verify -t 5 -C -m 0x3 ''
00:15:54.944 [2024-12-15 02:11:19.304442] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:15:54.944 [2024-12-15 02:11:19.304574] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74712 ]
00:15:54.944 [2024-12-15 02:11:19.469129] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:15:54.944 [2024-12-15 02:11:19.614827] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:15:54.944 [2024-12-15 02:11:19.614944] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:15:55.517 Running I/O for 5 seconds...
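The bdev_fio suite that completed above drives stock fio against SPDK bdevs: fio_config_gen writes a global section into bdev.fio, a [job_<bdev>] stanza is echoed per bdev, and fio is launched with the spdk_bdev external ioengine. Because the plugin is an ASAN-instrumented shared object, the wrapper first locates the sanitizer runtime with ldd and preloads it ahead of the plugin, since the sanitizer must be the first DSO initialized. A minimal bash sketch of that pattern, using the paths from this run (the exact fio_bdev helper in common/autotest_common.sh does more bookkeeping than shown here):

    # Sketch of the fio_bdev wrapper behaviour traced above (paths from this run).
    plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
    # Find the ASAN runtime the plugin links against, e.g. /usr/lib64/libasan.so.8.
    asan_lib=$(ldd "$plugin" | grep libasan | awk '{print $3}')
    # The sanitizer runtime must come first in LD_PRELOAD, then the fio plugin.
    LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio \
        --ioengine=spdk_bdev --iodepth=8 --bs=4k --runtime=10 \
        --spdk_json_conf=/home/vagrant/spdk_repo/spdk/test/bdev/bdev.json \
        /home/vagrant/spdk_repo/spdk/test/bdev/bdev.fio

The bdev_verify run launched just above continues below.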
00:15:57.848 24928.00 IOPS, 97.38 MiB/s [2024-12-15T02:11:23.557Z] 23792.00 IOPS, 92.94 MiB/s [2024-12-15T02:11:24.501Z] 23955.67 IOPS, 93.58 MiB/s [2024-12-15T02:11:25.443Z] 24488.00 IOPS, 95.66 MiB/s [2024-12-15T02:11:25.443Z] 24473.60 IOPS, 95.60 MiB/s
00:16:00.678 Latency(us)
00:16:00.678 [2024-12-15T02:11:25.443Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:00.678 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0x80000
00:16:00.678 nvme0n1 : 5.05 2052.52 8.02 0.00 0.00 62262.58 14821.22 60091.47
00:16:00.678 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x80000 length 0x80000
00:16:00.678 nvme0n1 : 5.07 1766.74 6.90 0.00 0.00 72303.69 7259.37 74206.92
00:16:00.678 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0x80000
00:16:00.678 nvme0n2 : 5.04 2057.32 8.04 0.00 0.00 62020.72 10687.41 68560.74
00:16:00.678 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x80000 length 0x80000
00:16:00.678 nvme0n2 : 5.11 1754.81 6.85 0.00 0.00 72633.71 10233.70 68157.44
00:16:00.678 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0x80000
00:16:00.678 nvme0n3 : 5.06 2047.85 8.00 0.00 0.00 62206.46 12905.55 62914.56
00:16:00.678 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x80000 length 0x80000
00:16:00.678 nvme0n3 : 5.09 1761.06 6.88 0.00 0.00 72205.69 11141.12 69770.63
00:16:00.678 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0xbd0bd
00:16:00.678 nvme1n1 : 5.06 2641.33 10.32 0.00 0.00 48056.73 6755.25 51622.20
00:16:00.678 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0xbd0bd length 0xbd0bd
00:16:00.678 nvme1n1 : 5.10 2522.64 9.85 0.00 0.00 50222.62 5494.94 60898.07
00:16:00.678 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0x20000
00:16:00.678 nvme2n1 : 5.05 2078.85 8.12 0.00 0.00 61095.77 9124.63 59688.17
00:16:00.678 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x20000 length 0x20000
00:16:00.678 nvme2n1 : 5.11 1777.53 6.94 0.00 0.00 71240.54 6956.90 65334.35
00:16:00.678 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0x0 length 0xa0000
00:16:00.678 nvme3n1 : 5.07 2046.25 7.99 0.00 0.00 61968.26 5142.06 67350.84
00:16:00.678 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 4096)
00:16:00.678 Verification LBA range: start 0xa0000 length 0xa0000
00:16:00.678 nvme3n1 : 5.10 1755.85 6.86 0.00 0.00 72014.61 5721.80 70980.53
00:16:00.678 [2024-12-15T02:11:25.443Z] ===================================================================================================================
00:16:00.678 [2024-12-15T02:11:25.443Z] Total : 24262.74 94.78 0.00 0.00 62893.83 5142.06 74206.92
00:16:01.621
00:16:01.621 real 0m6.933s
00:16:01.621 user 0m11.039s
00:16:01.621 sys 0m1.643s
00:16:01.621 ************************************
00:16:01.621 02:11:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:01.621 02:11:26 blockdev_xnvme.bdev_verify -- common/autotest_common.sh@10 -- # set +x
00:16:01.621 END TEST bdev_verify ************************************
00:16:01.621 02:11:26 blockdev_xnvme -- bdev/blockdev.sh@815 -- # run_test bdev_verify_big_io /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:16:01.621 02:11:26 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 16 -le 1 ']'
00:16:01.621 02:11:26 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:01.621 02:11:26 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:01.621 ************************************
00:16:01.621 START TEST bdev_verify_big_io
00:16:01.621 ************************************
00:16:01.621 02:11:26 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 65536 -w verify -t 5 -C -m 0x3 ''
00:16:01.882 [2024-12-15 02:11:26.310949] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:16:01.882 [2024-12-15 02:11:26.311094] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74814 ]
00:16:01.882 [2024-12-15 02:11:26.476803] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2
00:16:01.882 [2024-12-15 02:11:26.615375] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1
00:16:01.882 [2024-12-15 02:11:26.615491] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:02.454 Running I/O for 5 seconds...
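A quick sanity check on the bdev_verify summary above: bdevperf reports MiB/s as IOPS times the IO size (-o 4096), so the final progress sample of 24473.60 IOPS corresponds exactly to the printed 95.60 MiB/s:

    # 24473.60 IOPS x 4096 B per IO, converted to MiB/s.
    awk 'BEGIN { printf "%.2f MiB/s\n", 24473.60 * 4096 / (1024 * 1024) }'   # -> 95.60 MiB/s

The big-IO verify pass launched just above (same workload, 64 KiB IOs) continues below.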
00:16:08.574 1944.00 IOPS, 121.50 MiB/s [2024-12-15T02:11:33.913Z] 3012.00 IOPS, 188.25 MiB/s
00:16:09.148 Latency(us)
00:16:09.148 [2024-12-15T02:11:33.913Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:09.148 Job: nvme0n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0x8000
00:16:09.148 nvme0n1 : 5.59 137.42 8.59 0.00 0.00 901926.33 44766.13 1090519.04
00:16:09.148 Job: nvme0n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x8000 length 0x8000
00:16:09.148 nvme0n1 : 6.02 95.75 5.98 0.00 0.00 1289005.25 101631.21 1613193.85
00:16:09.148 Job: nvme0n2 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0x8000
00:16:09.148 nvme0n2 : 5.51 139.39 8.71 0.00 0.00 845522.31 15325.34 864671.90
00:16:09.148 Job: nvme0n2 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x8000 length 0x8000
00:16:09.148 nvme0n2 : 6.02 63.81 3.99 0.00 0.00 1819626.47 68964.04 3045709.98
00:16:09.148 Job: nvme0n3 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0x8000
00:16:09.148 nvme0n3 : 5.66 138.46 8.65 0.00 0.00 851718.51 98808.12 1690627.15
00:16:09.148 Job: nvme0n3 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x8000 length 0x8000
00:16:09.148 nvme0n3 : 6.02 106.27 6.64 0.00 0.00 1049991.33 48597.46 1193763.45
00:16:09.148 Job: nvme1n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0xbd0b
00:16:09.148 nvme1n1 : 5.73 164.63 10.29 0.00 0.00 697479.48 139541.27 974369.08
00:16:09.148 Job: nvme1n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0xbd0b length 0xbd0b
00:16:09.148 nvme1n1 : 6.09 138.72 8.67 0.00 0.00 762326.15 8166.79 1025991.29
00:16:09.148 Job: nvme2n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0x2000
00:16:09.148 nvme2n1 : 5.79 143.60 8.98 0.00 0.00 780832.84 77030.01 1819682.66
00:16:09.148 Job: nvme2n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x2000 length 0x2000
00:16:09.148 nvme2n1 : 6.14 138.11 8.63 0.00 0.00 739249.16 2054.30 877577.45
00:16:09.148 Job: nvme3n1 (Core Mask 0x1, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0x0 length 0xa000
00:16:09.148 nvme3n1 : 5.88 186.54 11.66 0.00 0.00 590167.17 1020.85 916294.10
00:16:09.148 Job: nvme3n1 (Core Mask 0x2, workload: verify, depth: 128, IO size: 65536)
00:16:09.148 Verification LBA range: start 0xa000 length 0xa000
00:16:09.148 nvme3n1 : 6.38 210.66 13.17 0.00 0.00 465593.72 409.60 3252198.79
00:16:09.148 [2024-12-15T02:11:33.913Z] ===================================================================================================================
00:16:09.148 [2024-12-15T02:11:33.913Z] Total : 1663.37 103.96 0.00 0.00 811342.92 409.60 3252198.79
00:16:09.719
00:16:09.719 real 0m8.121s
00:16:09.719 user 0m14.852s
00:16:09.719 sys 0m0.495s
00:16:09.719 ************************************
00:16:09.719 END TEST bdev_verify_big_io
00:16:09.719 ************************************
00:16:09.719 02:11:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:09.719 02:11:34 blockdev_xnvme.bdev_verify_big_io -- common/autotest_common.sh@10 -- # set +x
00:16:09.719 02:11:34 blockdev_xnvme -- bdev/blockdev.sh@816 -- # run_test bdev_write_zeroes /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:09.719 02:11:34 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:09.719 02:11:34 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:09.719 02:11:34 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:09.719 ************************************
00:16:09.719 START TEST bdev_write_zeroes
00:16:09.719 ************************************
00:16:09.720 02:11:34 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:09.720 [2024-12-15 02:11:34.479763] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:16:09.720 [2024-12-15 02:11:34.479864] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74929 ]
00:16:09.981 [2024-12-15 02:11:34.631432] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:09.981 [2024-12-15 02:11:34.722311] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:10.553 Running I/O for 1 seconds...
00:16:11.492 95808.00 IOPS, 374.25 MiB/s
00:16:11.492
00:16:11.492 Latency(us)
00:16:11.492 [2024-12-15T02:11:36.257Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max
00:16:11.492 Job: nvme0n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme0n1 : 1.02 15751.04 61.53 0.00 0.00 8118.30 5570.56 18148.43
00:16:11.492 Job: nvme0n2 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme0n2 : 1.02 15732.79 61.46 0.00 0.00 8122.83 5620.97 18450.90
00:16:11.492 Job: nvme0n3 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme0n3 : 1.02 15714.11 61.38 0.00 0.00 8127.20 5620.97 18854.20
00:16:11.492 Job: nvme1n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme1n1 : 1.02 16303.63 63.69 0.00 0.00 7827.79 3503.66 15325.34
00:16:11.492 Job: nvme2n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme2n1 : 1.02 15694.64 61.31 0.00 0.00 8126.85 5419.32 19156.68
00:16:11.492 Job: nvme3n1 (Core Mask 0x1, workload: write_zeroes, depth: 128, IO size: 4096)
00:16:11.492 nvme3n1 : 1.02 15676.35 61.24 0.00 0.00 8089.76 4864.79 19559.98
00:16:11.492 [2024-12-15T02:11:36.257Z] ===================================================================================================================
00:16:11.492 [2024-12-15T02:11:36.257Z] Total : 94872.57 370.60 0.00 0.00 8067.03 3503.66 19559.98
00:16:12.064
00:16:12.064 real 0m2.289s
00:16:12.064 user 0m1.674s
00:16:12.064 sys 0m0.445s
00:16:12.064 02:11:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:12.064 ************************************
00:16:12.064 END TEST bdev_write_zeroes
00:16:12.064 ************************************
00:16:12.064 02:11:36 blockdev_xnvme.bdev_write_zeroes -- common/autotest_common.sh@10 -- # set +x
00:16:12.064 02:11:36 blockdev_xnvme -- bdev/blockdev.sh@819 -- # run_test bdev_json_nonenclosed /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:12.064 02:11:36 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:12.064 02:11:36 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:12.064 02:11:36 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:12.064 ************************************
00:16:12.064 START TEST bdev_json_nonenclosed
00:16:12.064 ************************************
00:16:12.064 02:11:36 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:12.324 [2024-12-15 02:11:36.839219] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:16:12.324 [2024-12-15 02:11:36.839351] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74971 ]
00:16:12.324 [2024-12-15 02:11:36.999068] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.324 [2024-12-15 02:11:37.083483] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.324 [2024-12-15 02:11:37.083560] json_config.c: 608:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: not enclosed in {}.
00:16:12.324 [2024-12-15 02:11:37.083574] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:16:12.324 [2024-12-15 02:11:37.083583] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:12.585
00:16:12.585 real 0m0.461s
00:16:12.585 user 0m0.251s
00:16:12.585 sys 0m0.107s
00:16:12.585 02:11:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:12.585 ************************************
00:16:12.585 END TEST bdev_json_nonenclosed
00:16:12.585 ************************************
00:16:12.585 02:11:37 blockdev_xnvme.bdev_json_nonenclosed -- common/autotest_common.sh@10 -- # set +x
00:16:12.585 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@822 -- # run_test bdev_json_nonarray /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:12.585 02:11:37 blockdev_xnvme -- common/autotest_common.sh@1105 -- # '[' 13 -le 1 ']'
00:16:12.585 02:11:37 blockdev_xnvme -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:12.585 02:11:37 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:12.585 ************************************
00:16:12.585 START TEST bdev_json_nonarray
00:16:12.585 ************************************
00:16:12.585 02:11:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf --json /home/vagrant/spdk_repo/spdk/test/bdev/nonarray.json -q 128 -o 4096 -w write_zeroes -t 1 ''
00:16:12.847 [2024-12-15 02:11:37.353039] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
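bdev_json_nonenclosed, which finished just above, feeds bdevperf a configuration whose subsystems are not wrapped in a top-level {} and passes only if json_config rejects it and the app stops with a non-zero status; bdev_json_nonarray, starting here, does the same with a "subsystems" value that is not an array. A hedged bash sketch of that negative check follows; the fixture content shown is an assumption for illustration, the real nonenclosed.json ships in test/bdev:

    # Hypothetical reconstruction of the negative-config check; the shipped
    # test/bdev/nonenclosed.json may differ, but the expectation is the same.
    cat > /tmp/nonenclosed.json <<'EOF'
    "subsystems": []
    EOF
    if /home/vagrant/spdk_repo/spdk/build/examples/bdevperf \
            --json /tmp/nonenclosed.json -q 128 -o 4096 -w write_zeroes -t 1 ''; then
        echo "invalid config was accepted but must be rejected" >&2
        exit 1
    fi

The bdev_json_nonarray run started just above continues below.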
00:16:12.847 [2024-12-15 02:11:37.353160] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid74996 ]
00:16:12.847 [2024-12-15 02:11:37.509042] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:12.847 [2024-12-15 02:11:37.593437] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:12.847 [2024-12-15 02:11:37.593515] json_config.c: 614:json_config_prepare_ctx: *ERROR*: Invalid JSON configuration: 'subsystems' should be an array.
00:16:12.847 [2024-12-15 02:11:37.593530] rpc.c: 190:spdk_rpc_server_finish: *ERROR*: No server listening on provided address:
00:16:12.847 [2024-12-15 02:11:37.593538] app.c:1064:spdk_app_stop: *WARNING*: spdk_app_stop'd on non-zero
00:16:13.107
00:16:13.107 real 0m0.440s
00:16:13.107 user 0m0.246s
00:16:13.107 sys 0m0.090s
00:16:13.107 02:11:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:13.107 02:11:37 blockdev_xnvme.bdev_json_nonarray -- common/autotest_common.sh@10 -- # set +x
00:16:13.107 ************************************
00:16:13.107 END TEST bdev_json_nonarray
00:16:13.107 ************************************
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@824 -- # [[ xnvme == bdev ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@832 -- # [[ xnvme == gpt ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@836 -- # [[ xnvme == crypto_sw ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@848 -- # trap - SIGINT SIGTERM EXIT
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@849 -- # cleanup
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@23 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/aiofile
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@24 -- # rm -f /home/vagrant/spdk_repo/spdk/test/bdev/bdev.json
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@26 -- # [[ xnvme == rbd ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@30 -- # [[ xnvme == daos ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@34 -- # [[ xnvme = \g\p\t ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@40 -- # [[ xnvme == xnvme ]]
00:16:13.107 02:11:37 blockdev_xnvme -- bdev/blockdev.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh
00:16:13.678 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev
00:16:17.048 0000:00:10.0 (1b36 0010): nvme -> uio_pci_generic
00:16:17.310 0000:00:11.0 (1b36 0010): nvme -> uio_pci_generic
00:16:17.310 0000:00:13.0 (1b36 0010): nvme -> uio_pci_generic
00:16:17.571 0000:00:12.0 (1b36 0010): nvme -> uio_pci_generic
00:16:17.571
00:16:17.571 real 0m54.685s
00:16:17.571 user 1m20.248s
00:16:17.571 sys 0m34.767s
00:16:17.571 02:11:42 blockdev_xnvme -- common/autotest_common.sh@1130 -- # xtrace_disable
00:16:17.571 ************************************
00:16:17.571 END TEST blockdev_xnvme
00:16:17.571 ************************************
00:16:17.571 02:11:42 blockdev_xnvme -- common/autotest_common.sh@10 -- # set +x
00:16:17.571 02:11:42 -- spdk/autotest.sh@247 -- # run_test ublk /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:17.571 02:11:42 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:17.571 02:11:42 -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:17.571 02:11:42 -- common/autotest_common.sh@10 -- # set +x
00:16:17.571 ************************************
00:16:17.571 START TEST ublk
00:16:17.571 ************************************
00:16:17.571 02:11:42 ublk -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk.sh
00:16:17.571 * Looking for test storage...
00:16:17.571 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk
00:16:17.571 02:11:42 ublk -- common/autotest_common.sh@1710 -- # [[ y == y ]]
00:16:17.571 02:11:42 ublk -- common/autotest_common.sh@1711 -- # lcov --version
00:16:17.571 02:11:42 ublk -- common/autotest_common.sh@1711 -- # awk '{print $NF}'
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1711 -- # lt 1.15 2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@333 -- # local ver1 ver1_l
00:16:17.832 02:11:42 ublk -- scripts/common.sh@334 -- # local ver2 ver2_l
00:16:17.832 02:11:42 ublk -- scripts/common.sh@336 -- # IFS=.-:
00:16:17.832 02:11:42 ublk -- scripts/common.sh@336 -- # read -ra ver1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@337 -- # IFS=.-:
00:16:17.832 02:11:42 ublk -- scripts/common.sh@337 -- # read -ra ver2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@338 -- # local 'op=<'
00:16:17.832 02:11:42 ublk -- scripts/common.sh@340 -- # ver1_l=2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@341 -- # ver2_l=1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v
00:16:17.832 02:11:42 ublk -- scripts/common.sh@344 -- # case "$op" in
00:16:17.832 02:11:42 ublk -- scripts/common.sh@345 -- # : 1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@364 -- # (( v = 0 ))
00:16:17.832 02:11:42 ublk -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) ))
00:16:17.832 02:11:42 ublk -- scripts/common.sh@365 -- # decimal 1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@353 -- # local d=1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]]
00:16:17.832 02:11:42 ublk -- scripts/common.sh@355 -- # echo 1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@365 -- # ver1[v]=1
00:16:17.832 02:11:42 ublk -- scripts/common.sh@366 -- # decimal 2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@353 -- # local d=2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]]
00:16:17.832 02:11:42 ublk -- scripts/common.sh@355 -- # echo 2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@366 -- # ver2[v]=2
00:16:17.832 02:11:42 ublk -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] ))
00:16:17.832 02:11:42 ublk -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] ))
00:16:17.832 02:11:42 ublk -- scripts/common.sh@368 -- # return 0
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1'
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS=
00:16:17.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:17.832 --rc genhtml_branch_coverage=1
00:16:17.832 --rc genhtml_function_coverage=1
00:16:17.832 --rc genhtml_legend=1
00:16:17.832 --rc geninfo_all_blocks=1
00:16:17.832 --rc geninfo_unexecuted_blocks=1
00:16:17.832
00:16:17.832 '
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1724 -- # LCOV_OPTS='
00:16:17.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:17.832 --rc genhtml_branch_coverage=1
00:16:17.832 --rc genhtml_function_coverage=1
00:16:17.832 --rc genhtml_legend=1
00:16:17.832 --rc geninfo_all_blocks=1
00:16:17.832 --rc geninfo_unexecuted_blocks=1
00:16:17.832
00:16:17.832 '
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov
00:16:17.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:17.832 --rc genhtml_branch_coverage=1
00:16:17.832 --rc genhtml_function_coverage=1
00:16:17.832 --rc genhtml_legend=1
00:16:17.832 --rc geninfo_all_blocks=1
00:16:17.832 --rc geninfo_unexecuted_blocks=1
00:16:17.832
00:16:17.832 '
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1725 -- # LCOV='lcov
00:16:17.832 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1
00:16:17.832 --rc genhtml_branch_coverage=1
00:16:17.832 --rc genhtml_function_coverage=1
00:16:17.832 --rc genhtml_legend=1
00:16:17.832 --rc geninfo_all_blocks=1
00:16:17.832 --rc geninfo_unexecuted_blocks=1
00:16:17.832
00:16:17.832 '
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh
00:16:17.832 02:11:42 ublk -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128
00:16:17.832 02:11:42 ublk -- lvol/common.sh@7 -- # MALLOC_BS=512
00:16:17.832 02:11:42 ublk -- lvol/common.sh@8 -- # AIO_SIZE_MB=400
00:16:17.832 02:11:42 ublk -- lvol/common.sh@9 -- # AIO_BS=4096
00:16:17.832 02:11:42 ublk -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4
00:16:17.832 02:11:42 ublk -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304
00:16:17.832 02:11:42 ublk -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124
00:16:17.832 02:11:42 ublk -- lvol/common.sh@14 -- # LVS_DEFAULT_CAPACITY=130023424
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@11 -- # [[ -z '' ]]
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@12 -- # NUM_DEVS=4
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@13 -- # NUM_QUEUE=4
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@14 -- # QUEUE_DEPTH=512
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@15 -- # MALLOC_SIZE_MB=128
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@17 -- # STOP_DISKS=1
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@27 -- # MALLOC_BS=4096
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@28 -- # FILE_SIZE=134217728
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@29 -- # MAX_DEV_ID=3
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@133 -- # modprobe ublk_drv
00:16:17.832 02:11:42 ublk -- ublk/ublk.sh@136 -- # run_test test_save_ublk_config test_save_config
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']'
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable
00:16:17.832 02:11:42 ublk -- common/autotest_common.sh@10 -- # set +x
00:16:17.832 ************************************
00:16:17.832 START TEST test_save_ublk_config
00:16:17.832 ************************************
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@1129 -- # test_save_config
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- ublk/ublk.sh@100 -- # local tgtpid blkpath config
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- ublk/ublk.sh@103 -- # tgtpid=75292
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- ublk/ublk.sh@104 -- # trap 'killprocess $tgtpid' EXIT
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- ublk/ublk.sh@102 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- ublk/ublk.sh@106 -- # waitforlisten 75292
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75292 ']'
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100
00:16:17.832 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...'
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable
00:16:17.832 02:11:42 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:17.832 [2024-12-15 02:11:42.520860] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
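The lcov probe in the ublk prologue above runs cmp_versions from scripts/common.sh: each version string is split into numeric fields on '.', '-' and ':' (IFS=.-:), and the fields are compared left to right until one side wins. A standalone bash sketch of the same less-than check, simplified from the traced helper (the real cmp_versions also handles '>' and '==' operators):

    # Standalone sketch of the cmp_versions "lt" logic traced above, e.g. "lt 1.15 2".
    lt() {
        local IFS=.-:                      # split on dots, dashes and colons
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$2"
        local v
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1   # greater, so not less-than
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0
        done
        return 1                           # equal, so not less-than
    }
    lt 1.15 2 && echo "1.15 < 2"           # prints: 1.15 < 2

The spdk_tgt startup for test_save_ublk_config, launched just above, continues below.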
00:16:17.832 [2024-12-15 02:11:42.521209] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75292 ]
00:16:18.094 [2024-12-15 02:11:42.679025] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1
00:16:18.094 [2024-12-15 02:11:42.829817] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 ))
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- ublk/ublk.sh@107 -- # blkpath=/dev/ublkb0
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- ublk/ublk.sh@108 -- # rpc_cmd
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:19.037 [2024-12-15 02:11:43.674233] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled
00:16:19.037 [2024-12-15 02:11:43.675250] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully
00:16:19.037 malloc0
00:16:19.037 [2024-12-15 02:11:43.754380] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128
00:16:19.037 [2024-12-15 02:11:43.754496] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0
00:16:19.037 [2024-12-15 02:11:43.754509] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq
00:16:19.037 [2024-12-15 02:11:43.754518] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV
00:16:19.037 [2024-12-15 02:11:43.762465] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed
00:16:19.037 [2024-12-15 02:11:43.762505] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS
00:16:19.037 [2024-12-15 02:11:43.770242] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed
00:16:19.037 [2024-12-15 02:11:43.770384] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV
00:16:19.037 [2024-12-15 02:11:43.786237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed
00:16:19.037 0
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # rpc_cmd save_config
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable
00:16:19.037 02:11:43 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x
00:16:19.611 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]]
00:16:19.611 02:11:44 ublk.test_save_ublk_config -- ublk/ublk.sh@115 -- # config='{
00:16:19.611 "subsystems": [
00:16:19.611 {
00:16:19.611 "subsystem": "fsdev",
00:16:19.611 "config": [
00:16:19.611 {
00:16:19.611 "method": "fsdev_set_opts",
00:16:19.611 "params": {
00:16:19.611 "fsdev_io_pool_size": 65535,
00:16:19.611 "fsdev_io_cache_size": 256
00:16:19.611 }
00:16:19.611 }
00:16:19.611 ]
00:16:19.611 },
00:16:19.611 {
00:16:19.611 "subsystem": "keyring",
00:16:19.611 "config": []
00:16:19.611 },
00:16:19.611 {
00:16:19.611 "subsystem": "iobuf",
00:16:19.611 "config": [
00:16:19.611 {
00:16:19.611 "method": "iobuf_set_options", 00:16:19.611 "params": { 00:16:19.611 "small_pool_count": 8192, 00:16:19.611 "large_pool_count": 1024, 00:16:19.611 "small_bufsize": 8192, 00:16:19.611 "large_bufsize": 135168, 00:16:19.611 "enable_numa": false 00:16:19.611 } 00:16:19.611 } 00:16:19.611 ] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "sock", 00:16:19.611 "config": [ 00:16:19.611 { 00:16:19.611 "method": "sock_set_default_impl", 00:16:19.611 "params": { 00:16:19.611 "impl_name": "posix" 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "sock_impl_set_options", 00:16:19.611 "params": { 00:16:19.611 "impl_name": "ssl", 00:16:19.611 "recv_buf_size": 4096, 00:16:19.611 "send_buf_size": 4096, 00:16:19.611 "enable_recv_pipe": true, 00:16:19.611 "enable_quickack": false, 00:16:19.611 "enable_placement_id": 0, 00:16:19.611 "enable_zerocopy_send_server": true, 00:16:19.611 "enable_zerocopy_send_client": false, 00:16:19.611 "zerocopy_threshold": 0, 00:16:19.611 "tls_version": 0, 00:16:19.611 "enable_ktls": false 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "sock_impl_set_options", 00:16:19.611 "params": { 00:16:19.611 "impl_name": "posix", 00:16:19.611 "recv_buf_size": 2097152, 00:16:19.611 "send_buf_size": 2097152, 00:16:19.611 "enable_recv_pipe": true, 00:16:19.611 "enable_quickack": false, 00:16:19.611 "enable_placement_id": 0, 00:16:19.611 "enable_zerocopy_send_server": true, 00:16:19.611 "enable_zerocopy_send_client": false, 00:16:19.611 "zerocopy_threshold": 0, 00:16:19.611 "tls_version": 0, 00:16:19.611 "enable_ktls": false 00:16:19.611 } 00:16:19.611 } 00:16:19.611 ] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "vmd", 00:16:19.611 "config": [] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "accel", 00:16:19.611 "config": [ 00:16:19.611 { 00:16:19.611 "method": "accel_set_options", 00:16:19.611 "params": { 00:16:19.611 "small_cache_size": 128, 00:16:19.611 "large_cache_size": 16, 00:16:19.611 "task_count": 2048, 00:16:19.611 "sequence_count": 2048, 00:16:19.611 "buf_count": 2048 00:16:19.611 } 00:16:19.611 } 00:16:19.611 ] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "bdev", 00:16:19.611 "config": [ 00:16:19.611 { 00:16:19.611 "method": "bdev_set_options", 00:16:19.611 "params": { 00:16:19.611 "bdev_io_pool_size": 65535, 00:16:19.611 "bdev_io_cache_size": 256, 00:16:19.611 "bdev_auto_examine": true, 00:16:19.611 "iobuf_small_cache_size": 128, 00:16:19.611 "iobuf_large_cache_size": 16 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_raid_set_options", 00:16:19.611 "params": { 00:16:19.611 "process_window_size_kb": 1024, 00:16:19.611 "process_max_bandwidth_mb_sec": 0 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_iscsi_set_options", 00:16:19.611 "params": { 00:16:19.611 "timeout_sec": 30 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_nvme_set_options", 00:16:19.611 "params": { 00:16:19.611 "action_on_timeout": "none", 00:16:19.611 "timeout_us": 0, 00:16:19.611 "timeout_admin_us": 0, 00:16:19.611 "keep_alive_timeout_ms": 10000, 00:16:19.611 "arbitration_burst": 0, 00:16:19.611 "low_priority_weight": 0, 00:16:19.611 "medium_priority_weight": 0, 00:16:19.611 "high_priority_weight": 0, 00:16:19.611 "nvme_adminq_poll_period_us": 10000, 00:16:19.611 "nvme_ioq_poll_period_us": 0, 00:16:19.611 "io_queue_requests": 0, 00:16:19.611 "delay_cmd_submit": true, 00:16:19.611 "transport_retry_count": 4, 00:16:19.611 
"bdev_retry_count": 3, 00:16:19.611 "transport_ack_timeout": 0, 00:16:19.611 "ctrlr_loss_timeout_sec": 0, 00:16:19.611 "reconnect_delay_sec": 0, 00:16:19.611 "fast_io_fail_timeout_sec": 0, 00:16:19.611 "disable_auto_failback": false, 00:16:19.611 "generate_uuids": false, 00:16:19.611 "transport_tos": 0, 00:16:19.611 "nvme_error_stat": false, 00:16:19.611 "rdma_srq_size": 0, 00:16:19.611 "io_path_stat": false, 00:16:19.611 "allow_accel_sequence": false, 00:16:19.611 "rdma_max_cq_size": 0, 00:16:19.611 "rdma_cm_event_timeout_ms": 0, 00:16:19.611 "dhchap_digests": [ 00:16:19.611 "sha256", 00:16:19.611 "sha384", 00:16:19.611 "sha512" 00:16:19.611 ], 00:16:19.611 "dhchap_dhgroups": [ 00:16:19.611 "null", 00:16:19.611 "ffdhe2048", 00:16:19.611 "ffdhe3072", 00:16:19.611 "ffdhe4096", 00:16:19.611 "ffdhe6144", 00:16:19.611 "ffdhe8192" 00:16:19.611 ], 00:16:19.611 "rdma_umr_per_io": false 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_nvme_set_hotplug", 00:16:19.611 "params": { 00:16:19.611 "period_us": 100000, 00:16:19.611 "enable": false 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_malloc_create", 00:16:19.611 "params": { 00:16:19.611 "name": "malloc0", 00:16:19.611 "num_blocks": 8192, 00:16:19.611 "block_size": 4096, 00:16:19.611 "physical_block_size": 4096, 00:16:19.611 "uuid": "8478f8f4-1c56-4776-ac88-9c6ced4dea6c", 00:16:19.611 "optimal_io_boundary": 0, 00:16:19.611 "md_size": 0, 00:16:19.611 "dif_type": 0, 00:16:19.611 "dif_is_head_of_md": false, 00:16:19.611 "dif_pi_format": 0 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "bdev_wait_for_examine" 00:16:19.611 } 00:16:19.611 ] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "scsi", 00:16:19.611 "config": null 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "scheduler", 00:16:19.611 "config": [ 00:16:19.611 { 00:16:19.611 "method": "framework_set_scheduler", 00:16:19.611 "params": { 00:16:19.611 "name": "static" 00:16:19.611 } 00:16:19.611 } 00:16:19.611 ] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "vhost_scsi", 00:16:19.611 "config": [] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "vhost_blk", 00:16:19.611 "config": [] 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "subsystem": "ublk", 00:16:19.611 "config": [ 00:16:19.611 { 00:16:19.611 "method": "ublk_create_target", 00:16:19.611 "params": { 00:16:19.611 "cpumask": "1" 00:16:19.611 } 00:16:19.611 }, 00:16:19.611 { 00:16:19.611 "method": "ublk_start_disk", 00:16:19.611 "params": { 00:16:19.611 "bdev_name": "malloc0", 00:16:19.611 "ublk_id": 0, 00:16:19.611 "num_queues": 1, 00:16:19.611 "queue_depth": 128 00:16:19.611 } 00:16:19.612 } 00:16:19.612 ] 00:16:19.612 }, 00:16:19.612 { 00:16:19.612 "subsystem": "nbd", 00:16:19.612 "config": [] 00:16:19.612 }, 00:16:19.612 { 00:16:19.612 "subsystem": "nvmf", 00:16:19.612 "config": [ 00:16:19.612 { 00:16:19.612 "method": "nvmf_set_config", 00:16:19.612 "params": { 00:16:19.612 "discovery_filter": "match_any", 00:16:19.612 "admin_cmd_passthru": { 00:16:19.612 "identify_ctrlr": false 00:16:19.612 }, 00:16:19.612 "dhchap_digests": [ 00:16:19.612 "sha256", 00:16:19.612 "sha384", 00:16:19.612 "sha512" 00:16:19.612 ], 00:16:19.612 "dhchap_dhgroups": [ 00:16:19.612 "null", 00:16:19.612 "ffdhe2048", 00:16:19.612 "ffdhe3072", 00:16:19.612 "ffdhe4096", 00:16:19.612 "ffdhe6144", 00:16:19.612 "ffdhe8192" 00:16:19.612 ] 00:16:19.612 } 00:16:19.612 }, 00:16:19.612 { 00:16:19.612 "method": "nvmf_set_max_subsystems", 00:16:19.612 "params": { 
00:16:19.612 "max_subsystems": 1024 00:16:19.612 } 00:16:19.612 }, 00:16:19.612 { 00:16:19.612 "method": "nvmf_set_crdt", 00:16:19.612 "params": { 00:16:19.612 "crdt1": 0, 00:16:19.612 "crdt2": 0, 00:16:19.612 "crdt3": 0 00:16:19.612 } 00:16:19.612 } 00:16:19.612 ] 00:16:19.612 }, 00:16:19.612 { 00:16:19.612 "subsystem": "iscsi", 00:16:19.612 "config": [ 00:16:19.612 { 00:16:19.612 "method": "iscsi_set_options", 00:16:19.612 "params": { 00:16:19.612 "node_base": "iqn.2016-06.io.spdk", 00:16:19.612 "max_sessions": 128, 00:16:19.612 "max_connections_per_session": 2, 00:16:19.612 "max_queue_depth": 64, 00:16:19.612 "default_time2wait": 2, 00:16:19.612 "default_time2retain": 20, 00:16:19.612 "first_burst_length": 8192, 00:16:19.612 "immediate_data": true, 00:16:19.612 "allow_duplicated_isid": false, 00:16:19.612 "error_recovery_level": 0, 00:16:19.612 "nop_timeout": 60, 00:16:19.612 "nop_in_interval": 30, 00:16:19.612 "disable_chap": false, 00:16:19.612 "require_chap": false, 00:16:19.612 "mutual_chap": false, 00:16:19.612 "chap_group": 0, 00:16:19.612 "max_large_datain_per_connection": 64, 00:16:19.612 "max_r2t_per_connection": 4, 00:16:19.612 "pdu_pool_size": 36864, 00:16:19.612 "immediate_data_pool_size": 16384, 00:16:19.612 "data_out_pool_size": 2048 00:16:19.612 } 00:16:19.612 } 00:16:19.612 ] 00:16:19.612 } 00:16:19.612 ] 00:16:19.612 }' 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- ublk/ublk.sh@116 -- # killprocess 75292 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75292 ']' 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75292 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75292 00:16:19.612 killing process with pid 75292 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75292' 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75292 00:16:19.612 02:11:44 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75292 00:16:20.554 [2024-12-15 02:11:45.283292] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:20.814 [2024-12-15 02:11:45.319225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:20.814 [2024-12-15 02:11:45.319430] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:20.814 [2024-12-15 02:11:45.329269] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:20.814 [2024-12-15 02:11:45.329349] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:20.815 [2024-12-15 02:11:45.329367] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:20.815 [2024-12-15 02:11:45.329406] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:20.815 [2024-12-15 02:11:45.329590] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- ublk/ublk.sh@119 -- # tgtpid=75362 00:16:22.201 02:11:46 
ublk.test_save_ublk_config -- ublk/ublk.sh@121 -- # waitforlisten 75362 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@835 -- # '[' -z 75362 ']' 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:22.201 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ublk -c /dev/fd/63 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:22.201 02:11:46 ublk.test_save_ublk_config -- ublk/ublk.sh@118 -- # echo '{ 00:16:22.201 "subsystems": [ 00:16:22.201 { 00:16:22.201 "subsystem": "fsdev", 00:16:22.201 "config": [ 00:16:22.201 { 00:16:22.201 "method": "fsdev_set_opts", 00:16:22.201 "params": { 00:16:22.201 "fsdev_io_pool_size": 65535, 00:16:22.201 "fsdev_io_cache_size": 256 00:16:22.201 } 00:16:22.201 } 00:16:22.201 ] 00:16:22.201 }, 00:16:22.201 { 00:16:22.201 "subsystem": "keyring", 00:16:22.201 "config": [] 00:16:22.201 }, 00:16:22.201 { 00:16:22.201 "subsystem": "iobuf", 00:16:22.201 "config": [ 00:16:22.201 { 00:16:22.201 "method": "iobuf_set_options", 00:16:22.201 "params": { 00:16:22.201 "small_pool_count": 8192, 00:16:22.201 "large_pool_count": 1024, 00:16:22.201 "small_bufsize": 8192, 00:16:22.201 "large_bufsize": 135168, 00:16:22.201 "enable_numa": false 00:16:22.201 } 00:16:22.201 } 00:16:22.201 ] 00:16:22.201 }, 00:16:22.201 { 00:16:22.201 "subsystem": "sock", 00:16:22.201 "config": [ 00:16:22.201 { 00:16:22.201 "method": "sock_set_default_impl", 00:16:22.202 "params": { 00:16:22.202 "impl_name": "posix" 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "sock_impl_set_options", 00:16:22.202 "params": { 00:16:22.202 "impl_name": "ssl", 00:16:22.202 "recv_buf_size": 4096, 00:16:22.202 "send_buf_size": 4096, 00:16:22.202 "enable_recv_pipe": true, 00:16:22.202 "enable_quickack": false, 00:16:22.202 "enable_placement_id": 0, 00:16:22.202 "enable_zerocopy_send_server": true, 00:16:22.202 "enable_zerocopy_send_client": false, 00:16:22.202 "zerocopy_threshold": 0, 00:16:22.202 "tls_version": 0, 00:16:22.202 "enable_ktls": false 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "sock_impl_set_options", 00:16:22.202 "params": { 00:16:22.202 "impl_name": "posix", 00:16:22.202 "recv_buf_size": 2097152, 00:16:22.202 "send_buf_size": 2097152, 00:16:22.202 "enable_recv_pipe": true, 00:16:22.202 "enable_quickack": false, 00:16:22.202 "enable_placement_id": 0, 00:16:22.202 "enable_zerocopy_send_server": true, 00:16:22.202 "enable_zerocopy_send_client": false, 00:16:22.202 "zerocopy_threshold": 0, 00:16:22.202 "tls_version": 0, 00:16:22.202 "enable_ktls": false 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "vmd", 00:16:22.202 "config": [] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "accel", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "accel_set_options", 00:16:22.202 "params": { 00:16:22.202 
"small_cache_size": 128, 00:16:22.202 "large_cache_size": 16, 00:16:22.202 "task_count": 2048, 00:16:22.202 "sequence_count": 2048, 00:16:22.202 "buf_count": 2048 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "bdev", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "bdev_set_options", 00:16:22.202 "params": { 00:16:22.202 "bdev_io_pool_size": 65535, 00:16:22.202 "bdev_io_cache_size": 256, 00:16:22.202 "bdev_auto_examine": true, 00:16:22.202 "iobuf_small_cache_size": 128, 00:16:22.202 "iobuf_large_cache_size": 16 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_raid_set_options", 00:16:22.202 "params": { 00:16:22.202 "process_window_size_kb": 1024, 00:16:22.202 "process_max_bandwidth_mb_sec": 0 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_iscsi_set_options", 00:16:22.202 "params": { 00:16:22.202 "timeout_sec": 30 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_nvme_set_options", 00:16:22.202 "params": { 00:16:22.202 "action_on_timeout": "none", 00:16:22.202 "timeout_us": 0, 00:16:22.202 "timeout_admin_us": 0, 00:16:22.202 "keep_alive_timeout_ms": 10000, 00:16:22.202 "arbitration_burst": 0, 00:16:22.202 "low_priority_weight": 0, 00:16:22.202 "medium_priority_weight": 0, 00:16:22.202 "high_priority_weight": 0, 00:16:22.202 "nvme_adminq_poll_period_us": 10000, 00:16:22.202 "nvme_ioq_poll_period_us": 0, 00:16:22.202 "io_queue_requests": 0, 00:16:22.202 "delay_cmd_submit": true, 00:16:22.202 "transport_retry_count": 4, 00:16:22.202 "bdev_retry_count": 3, 00:16:22.202 "transport_ack_timeout": 0, 00:16:22.202 "ctrlr_loss_timeout_sec": 0, 00:16:22.202 "reconnect_delay_sec": 0, 00:16:22.202 "fast_io_fail_timeout_sec": 0, 00:16:22.202 "disable_auto_failback": false, 00:16:22.202 "generate_uuids": false, 00:16:22.202 "transport_tos": 0, 00:16:22.202 "nvme_error_stat": false, 00:16:22.202 "rdma_srq_size": 0, 00:16:22.202 "io_path_stat": false, 00:16:22.202 "allow_accel_sequence": false, 00:16:22.202 "rdma_max_cq_size": 0, 00:16:22.202 "rdma_cm_event_timeout_ms": 0, 00:16:22.202 "dhchap_digests": [ 00:16:22.202 "sha256", 00:16:22.202 "sha384", 00:16:22.202 "sha512" 00:16:22.202 ], 00:16:22.202 "dhchap_dhgroups": [ 00:16:22.202 "null", 00:16:22.202 "ffdhe2048", 00:16:22.202 "ffdhe3072", 00:16:22.202 "ffdhe4096", 00:16:22.202 "ffdhe6144", 00:16:22.202 "ffdhe8192" 00:16:22.202 ], 00:16:22.202 "rdma_umr_per_io": false 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_nvme_set_hotplug", 00:16:22.202 "params": { 00:16:22.202 "period_us": 100000, 00:16:22.202 "enable": false 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_malloc_create", 00:16:22.202 "params": { 00:16:22.202 "name": "malloc0", 00:16:22.202 "num_blocks": 8192, 00:16:22.202 "block_size": 4096, 00:16:22.202 "physical_block_size": 4096, 00:16:22.202 "uuid": "8478f8f4-1c56-4776-ac88-9c6ced4dea6c", 00:16:22.202 "optimal_io_boundary": 0, 00:16:22.202 "md_size": 0, 00:16:22.202 "dif_type": 0, 00:16:22.202 "dif_is_head_of_md": false, 00:16:22.202 "dif_pi_format": 0 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "bdev_wait_for_examine" 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "scsi", 00:16:22.202 "config": null 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "scheduler", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "framework_set_scheduler", 00:16:22.202 
"params": { 00:16:22.202 "name": "static" 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "vhost_scsi", 00:16:22.202 "config": [] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "vhost_blk", 00:16:22.202 "config": [] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "ublk", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "ublk_create_target", 00:16:22.202 "params": { 00:16:22.202 "cpumask": "1" 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "ublk_start_disk", 00:16:22.202 "params": { 00:16:22.202 "bdev_name": "malloc0", 00:16:22.202 "ublk_id": 0, 00:16:22.202 "num_queues": 1, 00:16:22.202 "queue_depth": 128 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "nbd", 00:16:22.202 "config": [] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "nvmf", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "nvmf_set_config", 00:16:22.202 "params": { 00:16:22.202 "discovery_filter": "match_any", 00:16:22.202 "admin_cmd_passthru": { 00:16:22.202 "identify_ctrlr": false 00:16:22.202 }, 00:16:22.202 "dhchap_digests": [ 00:16:22.202 "sha256", 00:16:22.202 "sha384", 00:16:22.202 "sha512" 00:16:22.202 ], 00:16:22.202 "dhchap_dhgroups": [ 00:16:22.202 "null", 00:16:22.202 "ffdhe2048", 00:16:22.202 "ffdhe3072", 00:16:22.202 "ffdhe4096", 00:16:22.202 "ffdhe6144", 00:16:22.202 "ffdhe8192" 00:16:22.202 ] 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "nvmf_set_max_subsystems", 00:16:22.202 "params": { 00:16:22.202 "max_subsystems": 1024 00:16:22.202 } 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "method": "nvmf_set_crdt", 00:16:22.202 "params": { 00:16:22.202 "crdt1": 0, 00:16:22.202 "crdt2": 0, 00:16:22.202 "crdt3": 0 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }, 00:16:22.202 { 00:16:22.202 "subsystem": "iscsi", 00:16:22.202 "config": [ 00:16:22.202 { 00:16:22.202 "method": "iscsi_set_options", 00:16:22.202 "params": { 00:16:22.202 "node_base": "iqn.2016-06.io.spdk", 00:16:22.202 "max_sessions": 128, 00:16:22.202 "max_connections_per_session": 2, 00:16:22.202 "max_queue_depth": 64, 00:16:22.202 "default_time2wait": 2, 00:16:22.202 "default_time2retain": 20, 00:16:22.202 "first_burst_length": 8192, 00:16:22.202 "immediate_data": true, 00:16:22.202 "allow_duplicated_isid": false, 00:16:22.202 "error_recovery_level": 0, 00:16:22.202 "nop_timeout": 60, 00:16:22.202 "nop_in_interval": 30, 00:16:22.202 "disable_chap": false, 00:16:22.202 "require_chap": false, 00:16:22.202 "mutual_chap": false, 00:16:22.202 "chap_group": 0, 00:16:22.202 "max_large_datain_per_connection": 64, 00:16:22.202 "max_r2t_per_connection": 4, 00:16:22.202 "pdu_pool_size": 36864, 00:16:22.202 "immediate_data_pool_size": 16384, 00:16:22.202 "data_out_pool_size": 2048 00:16:22.202 } 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 } 00:16:22.202 ] 00:16:22.202 }' 00:16:22.202 [2024-12-15 02:11:46.940790] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:16:22.202 [2024-12-15 02:11:46.941274] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75362 ] 00:16:22.464 [2024-12-15 02:11:47.101493] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:16:22.464 [2024-12-15 02:11:47.225241] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:23.408 [2024-12-15 02:11:48.134219] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:23.408 [2024-12-15 02:11:48.135149] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:23.408 [2024-12-15 02:11:48.142379] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev malloc0 num_queues 1 queue_depth 128 00:16:23.408 [2024-12-15 02:11:48.142474] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 0 00:16:23.408 [2024-12-15 02:11:48.142484] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:23.408 [2024-12-15 02:11:48.142493] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:23.408 [2024-12-15 02:11:48.151326] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:23.408 [2024-12-15 02:11:48.151358] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:23.408 [2024-12-15 02:11:48.158236] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:23.408 [2024-12-15 02:11:48.158359] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:23.669 [2024-12-15 02:11:48.175228] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@868 -- # return 0 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # rpc_cmd ublk_get_disks 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # jq -r '.[0].ublk_device' 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- ublk/ublk.sh@122 -- # [[ /dev/ublkb0 == \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- ublk/ublk.sh@123 -- # [[ -b /dev/ublkb0 ]] 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- ublk/ublk.sh@125 -- # killprocess 75362 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@954 -- # '[' -z 75362 ']' 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@958 -- # kill -0 75362 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # uname 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75362 00:16:23.669 killing process with pid 75362 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:23.669 
02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75362' 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@973 -- # kill 75362 00:16:23.669 02:11:48 ublk.test_save_ublk_config -- common/autotest_common.sh@978 -- # wait 75362 00:16:25.050 [2024-12-15 02:11:49.401604] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:25.050 [2024-12-15 02:11:49.438284] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:25.050 [2024-12-15 02:11:49.438384] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:25.050 [2024-12-15 02:11:49.446219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:25.050 [2024-12-15 02:11:49.446259] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:25.050 [2024-12-15 02:11:49.446265] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:25.050 [2024-12-15 02:11:49.446284] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:25.050 [2024-12-15 02:11:49.446392] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:25.994 02:11:50 ublk.test_save_ublk_config -- ublk/ublk.sh@126 -- # trap - EXIT 00:16:25.994 ************************************ 00:16:25.994 END TEST test_save_ublk_config 00:16:25.994 ************************************ 00:16:25.994 00:16:25.994 real 0m8.193s 00:16:25.994 user 0m5.750s 00:16:25.994 sys 0m3.136s 00:16:25.994 02:11:50 ublk.test_save_ublk_config -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:25.994 02:11:50 ublk.test_save_ublk_config -- common/autotest_common.sh@10 -- # set +x 00:16:25.994 02:11:50 ublk -- ublk/ublk.sh@139 -- # spdk_pid=75435 00:16:25.994 02:11:50 ublk -- ublk/ublk.sh@140 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:25.994 02:11:50 ublk -- ublk/ublk.sh@141 -- # waitforlisten 75435 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@835 -- # '[' -z 75435 ']' 00:16:25.994 02:11:50 ublk -- ublk/ublk.sh@138 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:25.994 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:25.994 02:11:50 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:25.994 [2024-12-15 02:11:50.738108] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
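Before pid 75362 was killed above, the harness verified that replaying the saved config actually recreated the device node. A standalone sketch of that check, assuming rpc.py and jq as used throughout this log (the && chain is shorthand for the separate [[ ]] tests in the trace):

  # Ask the restored target which ublk devices it exposes and take the first,
  # mirroring ublk.sh@122-@123 above.
  dev=$(/home/vagrant/spdk_repo/spdk/scripts/rpc.py ublk_get_disks | jq -r '.[0].ublk_device')
  # The replayed config must yield a real block node at the same path.
  [[ "$dev" == /dev/ublkb0 && -b "$dev" ]] || exit 1

With that confirmed, the log moves on to a second spdk_tgt (pid 75435, core mask 0x3) that hosts the create/stop tests below.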
00:16:25.994 [2024-12-15 02:11:50.738253] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75435 ] 00:16:26.254 [2024-12-15 02:11:50.895029] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:26.254 [2024-12-15 02:11:50.972444] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:26.254 [2024-12-15 02:11:50.972510] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:26.820 02:11:51 ublk -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:26.820 02:11:51 ublk -- common/autotest_common.sh@868 -- # return 0 00:16:26.820 02:11:51 ublk -- ublk/ublk.sh@143 -- # run_test test_create_ublk test_create_ublk 00:16:26.820 02:11:51 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:26.820 02:11:51 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:26.820 02:11:51 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:26.820 ************************************ 00:16:26.820 START TEST test_create_ublk 00:16:26.820 ************************************ 00:16:26.820 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@1129 -- # test_create_ublk 00:16:26.820 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # rpc_cmd ublk_create_target 00:16:26.820 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:26.820 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 [2024-12-15 02:11:51.586213] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:27.078 [2024-12-15 02:11:51.587717] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@33 -- # ublk_target= 00:16:27.078 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # rpc_cmd bdev_malloc_create 128 4096 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.078 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@35 -- # malloc_name=Malloc0 00:16:27.078 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.078 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.078 [2024-12-15 02:11:51.747323] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 00:16:27.078 [2024-12-15 02:11:51.747625] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:27.078 [2024-12-15 02:11:51.747646] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:27.078 [2024-12-15 02:11:51.747652] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:27.078 [2024-12-15 02:11:51.755237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:27.078 [2024-12-15 02:11:51.755255] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:27.078 
[2024-12-15 02:11:51.763231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:27.078 [2024-12-15 02:11:51.763717] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:27.079 [2024-12-15 02:11:51.773292] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:27.079 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@37 -- # ublk_id=0 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@38 -- # ublk_path=/dev/ublkb0 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # rpc_cmd ublk_get_disks -n 0 00:16:27.079 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:27.079 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:27.079 02:11:51 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@39 -- # ublk_dev='[ 00:16:27.079 { 00:16:27.079 "ublk_device": "/dev/ublkb0", 00:16:27.079 "id": 0, 00:16:27.079 "queue_depth": 512, 00:16:27.079 "num_queues": 4, 00:16:27.079 "bdev_name": "Malloc0" 00:16:27.079 } 00:16:27.079 ]' 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # jq -r '.[0].ublk_device' 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@41 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:27.079 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # jq -r '.[0].id' 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@42 -- # [[ 0 = \0 ]] 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # jq -r '.[0].queue_depth' 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@43 -- # [[ 512 = \5\1\2 ]] 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # jq -r '.[0].num_queues' 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@44 -- # [[ 4 = \4 ]] 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # jq -r '.[0].bdev_name' 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@45 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:27.337 02:11:51 ublk.test_create_ublk -- ublk/ublk.sh@48 -- # run_fio_test /dev/ublkb0 0 134217728 write 0xcc '--time_based --runtime=10' 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@40 -- # local file=/dev/ublkb0 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@41 -- # local offset=0 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@42 -- # local size=134217728 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@43 -- # local rw=write 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@44 -- # local pattern=0xcc 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@45 -- # local 'extra_params=--time_based --runtime=10' 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@47 -- # local pattern_template= fio_template= 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@48 -- # [[ -n 0xcc ]] 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@49 -- # pattern_template='--do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@52 -- # fio_template='fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0' 
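The fio job template assembled above targets a device built a few lines earlier with three RPCs. Condensed into a standalone sketch (same commands as the trace; rpc is just local shorthand, and a target already started with -L ublk is assumed):

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  $rpc ublk_create_target                      # bring up the kernel-facing ublk target
  $rpc bdev_malloc_create 128 4096             # 128 MiB RAM bdev, 4 KiB blocks -> Malloc0
  $rpc ublk_start_disk Malloc0 0 -q 4 -d 512   # expose Malloc0 as /dev/ublkb0: 4 queues, depth 512

Note the verify flags in the job below: because the run is --time_based, the 10-second write phase consumes the whole runtime, which is why fio immediately warns that the verification read phase will never start.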
00:16:27.337 02:11:51 ublk.test_create_ublk -- lvol/common.sh@53 -- # fio --name=fio_test --filename=/dev/ublkb0 --offset=0 --size=134217728 --rw=write --direct=1 --time_based --runtime=10 --do_verify=1 --verify=pattern --verify_pattern=0xcc --verify_state_save=0 00:16:27.337 fio: verification read phase will never start because write phase uses all of runtime 00:16:27.337 fio_test: (g=0): rw=write, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=psync, iodepth=1 00:16:27.337 fio-3.35 00:16:27.337 Starting 1 process 00:16:39.534 00:16:39.534 fio_test: (groupid=0, jobs=1): err= 0: pid=75481: Sun Dec 15 02:12:02 2024 00:16:39.534 write: IOPS=15.1k, BW=59.1MiB/s (61.9MB/s)(591MiB/10001msec); 0 zone resets 00:16:39.534 clat (usec): min=39, max=4096, avg=65.33, stdev=96.72 00:16:39.534 lat (usec): min=40, max=4097, avg=65.78, stdev=96.73 00:16:39.534 clat percentiles (usec): 00:16:39.534 | 1.00th=[ 50], 5.00th=[ 52], 10.00th=[ 54], 20.00th=[ 57], 00:16:39.534 | 30.00th=[ 59], 40.00th=[ 60], 50.00th=[ 61], 60.00th=[ 63], 00:16:39.534 | 70.00th=[ 64], 80.00th=[ 67], 90.00th=[ 71], 95.00th=[ 75], 00:16:39.534 | 99.00th=[ 84], 99.50th=[ 90], 99.90th=[ 2008], 99.95th=[ 2802], 00:16:39.534 | 99.99th=[ 3490] 00:16:39.534 bw ( KiB/s): min=55112, max=61808, per=100.00%, avg=60516.63, stdev=1495.29, samples=19 00:16:39.534 iops : min=13778, max=15454, avg=15129.16, stdev=373.86, samples=19 00:16:39.534 lat (usec) : 50=1.15%, 100=98.51%, 250=0.14%, 500=0.02%, 750=0.01% 00:16:39.534 lat (usec) : 1000=0.02% 00:16:39.534 lat (msec) : 2=0.06%, 4=0.10%, 10=0.01% 00:16:39.534 cpu : usr=2.41%, sys=13.65%, ctx=151288, majf=0, minf=795 00:16:39.534 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:16:39.534 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.534 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:16:39.534 issued rwts: total=0,151255,0,0 short=0,0,0,0 dropped=0,0,0,0 00:16:39.534 latency : target=0, window=0, percentile=100.00%, depth=1 00:16:39.534 00:16:39.534 Run status group 0 (all jobs): 00:16:39.534 WRITE: bw=59.1MiB/s (61.9MB/s), 59.1MiB/s-59.1MiB/s (61.9MB/s-61.9MB/s), io=591MiB (620MB), run=10001-10001msec 00:16:39.534 00:16:39.534 Disk stats (read/write): 00:16:39.534 ublkb0: ios=0/149713, merge=0/0, ticks=0/8198, in_queue=8199, util=99.09% 00:16:39.534 02:12:02 ublk.test_create_ublk -- ublk/ublk.sh@51 -- # rpc_cmd ublk_stop_disk 0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 [2024-12-15 02:12:02.182453] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.534 [2024-12-15 02:12:02.225264] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.534 [2024-12-15 02:12:02.225936] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.534 [2024-12-15 02:12:02.233232] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.534 [2024-12-15 02:12:02.233471] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.534 [2024-12-15 02:12:02.233479] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- ublk/ublk.sh@53 -- # NOT rpc_cmd ublk_stop_disk 
0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@652 -- # local es=0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@654 -- # valid_exec_arg rpc_cmd ublk_stop_disk 0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@640 -- # local arg=rpc_cmd 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # type -t rpc_cmd 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@644 -- # case "$(type -t "$arg")" in 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # rpc_cmd ublk_stop_disk 0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 [2024-12-15 02:12:02.249272] ublk.c:1087:ublk_stop_disk: *ERROR*: no ublk dev with ublk_id=0 00:16:39.534 request: 00:16:39.534 { 00:16:39.534 "ublk_id": 0, 00:16:39.534 "method": "ublk_stop_disk", 00:16:39.534 "req_id": 1 00:16:39.534 } 00:16:39.534 Got JSON-RPC error response 00:16:39.534 response: 00:16:39.534 { 00:16:39.534 "code": -19, 00:16:39.534 "message": "No such device" 00:16:39.534 } 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 1 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@655 -- # es=1 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@663 -- # (( es > 128 )) 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@674 -- # [[ -n '' ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@679 -- # (( !es == 0 )) 00:16:39.534 02:12:02 ublk.test_create_ublk -- ublk/ublk.sh@54 -- # rpc_cmd ublk_destroy_target 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 [2024-12-15 02:12:02.265274] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:39.534 [2024-12-15 02:12:02.273216] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:39.534 [2024-12-15 02:12:02.273245] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- ublk/ublk.sh@56 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- ublk/ublk.sh@57 -- # check_leftover_devices 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@25 -- # rpc_cmd bdev_get_bdevs 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@26 -- # jq length 00:16:39.534 02:12:02 ublk.test_create_ublk -- 
lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@28 -- # jq length 00:16:39.534 ************************************ 00:16:39.534 END TEST test_create_ublk 00:16:39.534 ************************************ 00:16:39.534 02:12:02 ublk.test_create_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:39.534 00:16:39.534 real 0m11.157s 00:16:39.534 user 0m0.546s 00:16:39.534 sys 0m1.437s 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 02:12:02 ublk -- ublk/ublk.sh@144 -- # run_test test_create_multi_ublk test_create_multi_ublk 00:16:39.534 02:12:02 ublk -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:16:39.534 02:12:02 ublk -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:39.534 02:12:02 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 ************************************ 00:16:39.534 START TEST test_create_multi_ublk 00:16:39.534 ************************************ 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@1129 -- # test_create_multi_ublk 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # rpc_cmd ublk_create_target 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 [2024-12-15 02:12:02.773215] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:39.534 [2024-12-15 02:12:02.774692] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@62 -- # ublk_target= 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # seq 0 3 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc0 128 4096 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc0 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc0 0 -q 4 -d 512 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.534 02:12:02 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.534 [2024-12-15 02:12:02.989328] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk0: bdev Malloc0 num_queues 4 queue_depth 512 
00:16:39.535 [2024-12-15 02:12:02.989624] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc0 via ublk 0 00:16:39.535 [2024-12-15 02:12:02.989635] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk0: add to tailq 00:16:39.535 [2024-12-15 02:12:02.989644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.535 [2024-12-15 02:12:03.013218] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.535 [2024-12-15 02:12:03.013245] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.535 [2024-12-15 02:12:03.025225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.535 [2024-12-15 02:12:03.025719] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV 00:16:39.535 [2024-12-15 02:12:03.065226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=0 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc1 128 4096 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc1 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc1 1 -q 4 -d 512 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 [2024-12-15 02:12:03.278316] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev Malloc1 num_queues 4 queue_depth 512 00:16:39.535 [2024-12-15 02:12:03.278612] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc1 via ublk 1 00:16:39.535 [2024-12-15 02:12:03.278626] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:39.535 [2024-12-15 02:12:03.278631] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.535 [2024-12-15 02:12:03.286230] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.535 [2024-12-15 02:12:03.286247] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.535 [2024-12-15 02:12:03.294217] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.535 [2024-12-15 02:12:03.294700] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:39.535 [2024-12-15 02:12:03.311231] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=1 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:03 
ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc2 128 4096 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc2 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc2 2 -q 4 -d 512 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 [2024-12-15 02:12:03.470298] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk2: bdev Malloc2 num_queues 4 queue_depth 512 00:16:39.535 [2024-12-15 02:12:03.470597] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc2 via ublk 2 00:16:39.535 [2024-12-15 02:12:03.470608] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk2: add to tailq 00:16:39.535 [2024-12-15 02:12:03.470614] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.535 [2024-12-15 02:12:03.478240] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.535 [2024-12-15 02:12:03.478260] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.535 [2024-12-15 02:12:03.486225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.535 [2024-12-15 02:12:03.486720] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV 00:16:39.535 [2024-12-15 02:12:03.493258] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=2 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@64 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # rpc_cmd bdev_malloc_create -b Malloc3 128 4096 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@66 -- # malloc_name=Malloc3 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # rpc_cmd ublk_start_disk Malloc3 3 -q 4 -d 512 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 [2024-12-15 02:12:03.654321] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk3: bdev Malloc3 num_queues 4 queue_depth 512 00:16:39.535 [2024-12-15 02:12:03.654615] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev Malloc3 via ublk 3 00:16:39.535 [2024-12-15 02:12:03.654628] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk3: add to tailq 00:16:39.535 [2024-12-15 02:12:03.654633] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV 00:16:39.535 [2024-12-15 
02:12:03.663362] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:39.535 [2024-12-15 02:12:03.663379] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:39.535 [2024-12-15 02:12:03.670237] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:39.535 [2024-12-15 02:12:03.670733] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV 00:16:39.535 [2024-12-15 02:12:03.679253] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_START_DEV completed 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@68 -- # ublk_id=3 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # rpc_cmd ublk_get_disks 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@71 -- # ublk_dev='[ 00:16:39.535 { 00:16:39.535 "ublk_device": "/dev/ublkb0", 00:16:39.535 "id": 0, 00:16:39.535 "queue_depth": 512, 00:16:39.535 "num_queues": 4, 00:16:39.535 "bdev_name": "Malloc0" 00:16:39.535 }, 00:16:39.535 { 00:16:39.535 "ublk_device": "/dev/ublkb1", 00:16:39.535 "id": 1, 00:16:39.535 "queue_depth": 512, 00:16:39.535 "num_queues": 4, 00:16:39.535 "bdev_name": "Malloc1" 00:16:39.535 }, 00:16:39.535 { 00:16:39.535 "ublk_device": "/dev/ublkb2", 00:16:39.535 "id": 2, 00:16:39.535 "queue_depth": 512, 00:16:39.535 "num_queues": 4, 00:16:39.535 "bdev_name": "Malloc2" 00:16:39.535 }, 00:16:39.535 { 00:16:39.535 "ublk_device": "/dev/ublkb3", 00:16:39.535 "id": 3, 00:16:39.535 "queue_depth": 512, 00:16:39.535 "num_queues": 4, 00:16:39.535 "bdev_name": "Malloc3" 00:16:39.535 } 00:16:39.535 ]' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # seq 0 3 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[0].ublk_device' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb0 = \/\d\e\v\/\u\b\l\k\b\0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[0].id' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 0 = \0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[0].queue_depth' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[0].num_queues' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[0].bdev_name' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc0 = \M\a\l\l\o\c\0 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[1].ublk_device' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb1 = \/\d\e\v\/\u\b\l\k\b\1 ]] 
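The four-entry ublk_get_disks array above comes from a loop that pairs one malloc bdev with one ublk disk per index; the jq probes around it check each field of that array in turn. A compact equivalent of the creation loop, assuming MAX_DEV_ID=3 as the seq 0 3 trace implies:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in $(seq 0 3); do
    $rpc bdev_malloc_create -b Malloc$i 128 4096   # backing bdev for disk $i
    $rpc ublk_start_disk Malloc$i $i -q 4 -d 512   # becomes /dev/ublkb$i
  done
  $rpc ublk_get_disks                              # returns the JSON array shown above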
00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[1].id' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 1 = \1 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[1].queue_depth' 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.535 02:12:03 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[1].num_queues' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[1].bdev_name' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc1 = \M\a\l\l\o\c\1 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[2].ublk_device' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb2 = \/\d\e\v\/\u\b\l\k\b\2 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[2].id' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 2 = \2 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[2].queue_depth' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[2].num_queues' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[2].bdev_name' 00:16:39.535 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc2 = \M\a\l\l\o\c\2 ]] 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@72 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # jq -r '.[3].ublk_device' 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@74 -- # [[ /dev/ublkb3 = \/\d\e\v\/\u\b\l\k\b\3 ]] 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # jq -r '.[3].id' 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@75 -- # [[ 3 = \3 ]] 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # jq -r '.[3].queue_depth' 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@76 -- # [[ 512 = \5\1\2 ]] 00:16:39.536 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # jq -r '.[3].num_queues' 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@77 -- # [[ 4 = \4 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # jq -r '.[3].bdev_name' 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@78 -- # [[ Malloc3 = \M\a\l\l\o\c\3 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@84 -- # [[ 1 = \1 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # seq 0 3 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 0 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.794 [2024-12-15 02:12:04.350308] ublk.c: 469:ublk_ctrl_cmd_submit: 
*DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.794 [2024-12-15 02:12:04.395257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.794 [2024-12-15 02:12:04.396136] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.794 [2024-12-15 02:12:04.407243] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk0: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.794 [2024-12-15 02:12:04.407489] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk0: remove from tailq 00:16:39.794 [2024-12-15 02:12:04.407503] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 0 stopped 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 1 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.794 [2024-12-15 02:12:04.422288] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.794 [2024-12-15 02:12:04.458257] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.794 [2024-12-15 02:12:04.459057] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.794 [2024-12-15 02:12:04.466233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.794 [2024-12-15 02:12:04.466483] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:16:39.794 [2024-12-15 02:12:04.466496] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 2 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:39.794 [2024-12-15 02:12:04.482285] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV 00:16:39.794 [2024-12-15 02:12:04.514222] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:39.794 [2024-12-15 02:12:04.515011] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV 00:16:39.794 [2024-12-15 02:12:04.522229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk2: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:39.794 [2024-12-15 02:12:04.522471] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk2: remove from tailq 00:16:39.794 [2024-12-15 02:12:04.522483] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 2 stopped 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@85 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@86 -- # rpc_cmd ublk_stop_disk 3 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:39.794 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 
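Teardown then runs the creation loop in reverse: each disk is stopped (one STOP_DEV/DEL_DEV pair per device in the trace), the target itself is destroyed with an extended RPC timeout (the rpc.py -t 120 ublk_destroy_target call visible just below), and the backing bdevs are deleted. A sketch under the same assumptions as the loop above:

  rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py
  for i in $(seq 0 3); do $rpc ublk_stop_disk $i; done     # STOP_DEV + DEL_DEV per disk
  $rpc -t 120 ublk_destroy_target                          # -t 120: let destroy outlast the default timeout
  for i in $(seq 0 3); do $rpc bdev_malloc_delete Malloc$i; done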
00:16:39.794 [2024-12-15 02:12:04.538291] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV 00:16:40.053 [2024-12-15 02:12:04.575617] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_STOP_DEV completed 00:16:40.053 [2024-12-15 02:12:04.576644] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV 00:16:40.053 [2024-12-15 02:12:04.586225] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk3: ctrl cmd UBLK_CMD_DEL_DEV completed 00:16:40.053 [2024-12-15 02:12:04.586451] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk3: remove from tailq 00:16:40.053 [2024-12-15 02:12:04.586459] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 3 stopped 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@91 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 120 ublk_destroy_target 00:16:40.053 [2024-12-15 02:12:04.778265] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:40.053 [2024-12-15 02:12:04.786214] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:40.053 [2024-12-15 02:12:04.786240] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # seq 0 3 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc0 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.053 02:12:04 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.619 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.619 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.619 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc1 00:16:40.619 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.619 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:40.877 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:40.877 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:40.877 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc2 00:16:40.877 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:40.877 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.135 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.135 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@93 -- # for i in $(seq 0 $MAX_DEV_ID) 00:16:41.135 02:12:05 ublk.test_create_multi_ublk -- ublk/ublk.sh@94 -- # rpc_cmd bdev_malloc_delete Malloc3 00:16:41.135 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.135 02:12:05 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- ublk/ublk.sh@96 -- # check_leftover_devices 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # 
rpc_cmd bdev_get_bdevs 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@25 -- # leftover_bdevs='[]' 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # jq length 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@26 -- # '[' 0 == 0 ']' 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # rpc_cmd bdev_lvol_get_lvstores 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@27 -- # leftover_lvs='[]' 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # jq length 00:16:41.701 ************************************ 00:16:41.701 END TEST test_create_multi_ublk 00:16:41.701 ************************************ 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- lvol/common.sh@28 -- # '[' 0 == 0 ']' 00:16:41.701 00:16:41.701 real 0m3.527s 00:16:41.701 user 0m0.807s 00:16:41.701 sys 0m0.150s 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:41.701 02:12:06 ublk.test_create_multi_ublk -- common/autotest_common.sh@10 -- # set +x 00:16:41.701 02:12:06 ublk -- ublk/ublk.sh@146 -- # trap - SIGINT SIGTERM EXIT 00:16:41.701 02:12:06 ublk -- ublk/ublk.sh@147 -- # cleanup 00:16:41.701 02:12:06 ublk -- ublk/ublk.sh@130 -- # killprocess 75435 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@954 -- # '[' -z 75435 ']' 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@958 -- # kill -0 75435 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@959 -- # uname 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75435 00:16:41.701 killing process with pid 75435 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75435' 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@973 -- # kill 75435 00:16:41.701 02:12:06 ublk -- common/autotest_common.sh@978 -- # wait 75435 00:16:42.636 [2024-12-15 02:12:07.165255] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:16:42.636 [2024-12-15 02:12:07.165293] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:16:43.204 00:16:43.204 real 0m25.573s 00:16:43.204 user 0m35.784s 00:16:43.204 sys 0m10.270s 00:16:43.204 ************************************ 00:16:43.204 END TEST ublk 00:16:43.204 ************************************ 00:16:43.204 02:12:07 ublk -- common/autotest_common.sh@1130 -- # xtrace_disable 00:16:43.204 02:12:07 ublk -- common/autotest_common.sh@10 -- # set +x 00:16:43.204 02:12:07 -- spdk/autotest.sh@248 -- # run_test ublk_recovery /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:43.204 02:12:07 -- common/autotest_common.sh@1105 -- # '[' 2 -le 
1 ']' 00:16:43.204 02:12:07 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:16:43.204 02:12:07 -- common/autotest_common.sh@10 -- # set +x 00:16:43.204 ************************************ 00:16:43.204 START TEST ublk_recovery 00:16:43.204 ************************************ 00:16:43.204 02:12:07 ublk_recovery -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh 00:16:43.204 * Looking for test storage... 00:16:43.204 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ublk 00:16:43.204 02:12:07 ublk_recovery -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:16:43.204 02:12:07 ublk_recovery -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:16:43.204 02:12:07 ublk_recovery -- common/autotest_common.sh@1711 -- # lcov --version 00:16:43.464 02:12:07 ublk_recovery -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@333 -- # local ver1 ver1_l 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@334 -- # local ver2 ver2_l 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@336 -- # IFS=.-: 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@336 -- # read -ra ver1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@337 -- # IFS=.-: 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@337 -- # read -ra ver2 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@338 -- # local 'op=<' 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@340 -- # ver1_l=2 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@341 -- # ver2_l=1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@344 -- # case "$op" in 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@345 -- # : 1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@364 -- # (( v = 0 )) 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@365 -- # decimal 1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@353 -- # local d=1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@355 -- # echo 1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@365 -- # ver1[v]=1 00:16:43.464 02:12:07 ublk_recovery -- scripts/common.sh@366 -- # decimal 2 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@353 -- # local d=2 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@355 -- # echo 2 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@366 -- # ver2[v]=2 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:16:43.464 02:12:08 ublk_recovery -- scripts/common.sh@368 -- # return 0 00:16:43.464 02:12:08 ublk_recovery -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:16:43.464 02:12:08 ublk_recovery -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:16:43.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.464 --rc genhtml_branch_coverage=1 00:16:43.464 --rc genhtml_function_coverage=1 00:16:43.464 --rc genhtml_legend=1 00:16:43.464 --rc geninfo_all_blocks=1 00:16:43.464 --rc geninfo_unexecuted_blocks=1 00:16:43.464 00:16:43.464 ' 00:16:43.464 02:12:08 ublk_recovery -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:16:43.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.464 --rc genhtml_branch_coverage=1 00:16:43.464 --rc genhtml_function_coverage=1 00:16:43.464 --rc genhtml_legend=1 00:16:43.464 --rc geninfo_all_blocks=1 00:16:43.464 --rc geninfo_unexecuted_blocks=1 00:16:43.464 00:16:43.464 ' 00:16:43.464 02:12:08 ublk_recovery -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:16:43.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.464 --rc genhtml_branch_coverage=1 00:16:43.464 --rc genhtml_function_coverage=1 00:16:43.464 --rc genhtml_legend=1 00:16:43.464 --rc geninfo_all_blocks=1 00:16:43.464 --rc geninfo_unexecuted_blocks=1 00:16:43.464 00:16:43.464 ' 00:16:43.464 02:12:08 ublk_recovery -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:16:43.464 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:16:43.464 --rc genhtml_branch_coverage=1 00:16:43.464 --rc genhtml_function_coverage=1 00:16:43.464 --rc genhtml_legend=1 00:16:43.464 --rc geninfo_all_blocks=1 00:16:43.464 --rc geninfo_unexecuted_blocks=1 00:16:43.464 00:16:43.464 ' 00:16:43.464 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/lvol/common.sh 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@6 -- # MALLOC_SIZE_MB=128 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@7 -- # MALLOC_BS=512 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@8 -- # AIO_SIZE_MB=400 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@9 -- # AIO_BS=4096 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@10 -- # LVS_DEFAULT_CLUSTER_SIZE_MB=4 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@11 -- # LVS_DEFAULT_CLUSTER_SIZE=4194304 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@13 -- # LVS_DEFAULT_CAPACITY_MB=124 00:16:43.464 02:12:08 ublk_recovery -- lvol/common.sh@14 
-- # LVS_DEFAULT_CAPACITY=130023424 00:16:43.464 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@11 -- # modprobe ublk_drv 00:16:43.464 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:43.464 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@19 -- # spdk_pid=75831 00:16:43.464 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@20 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:43.465 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@21 -- # waitforlisten 75831 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75831 ']' 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:43.465 02:12:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:43.465 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:43.465 [2024-12-15 02:12:08.088706] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:16:43.465 [2024-12-15 02:12:08.088834] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75831 ] 00:16:43.723 [2024-12-15 02:12:08.242013] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:43.723 [2024-12-15 02:12:08.327999] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:43.723 [2024-12-15 02:12:08.328067] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:44.290 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@23 -- # rpc_cmd ublk_create_target 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.290 [2024-12-15 02:12:08.920218] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:44.290 [2024-12-15 02:12:08.921770] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.290 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@24 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.290 malloc0 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.290 02:12:08 ublk_recovery -- ublk/ublk_recovery.sh@25 -- # rpc_cmd ublk_start_disk malloc0 1 -q 2 -d 128 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:44.290 02:12:08 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:44.290 [2024-12-15 02:12:09.000481] ublk.c:1924:ublk_start_disk: *DEBUG*: ublk1: bdev malloc0 num_queues 
2 queue_depth 128 00:16:44.290 [2024-12-15 02:12:09.000567] ublk.c:1965:ublk_start_disk: *INFO*: Enabling kernel access to bdev malloc0 via ublk 1 00:16:44.290 [2024-12-15 02:12:09.000576] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:44.290 [2024-12-15 02:12:09.000582] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV 00:16:44.290 [2024-12-15 02:12:09.009295] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_ADD_DEV completed 00:16:44.290 [2024-12-15 02:12:09.009313] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS 00:16:44.290 [2024-12-15 02:12:09.016226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_SET_PARAMS completed 00:16:44.290 [2024-12-15 02:12:09.016337] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV 00:16:44.290 [2024-12-15 02:12:09.032226] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_DEV completed 00:16:44.290 1 00:16:44.290 02:12:09 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:44.290 02:12:09 ublk_recovery -- ublk/ublk_recovery.sh@27 -- # sleep 1 00:16:45.664 02:12:10 ublk_recovery -- ublk/ublk_recovery.sh@31 -- # fio_proc=75870 00:16:45.664 02:12:10 ublk_recovery -- ublk/ublk_recovery.sh@33 -- # sleep 5 00:16:45.664 02:12:10 ublk_recovery -- ublk/ublk_recovery.sh@30 -- # taskset -c 2-3 fio --name=fio_test --filename=/dev/ublkb1 --numjobs=1 --iodepth=128 --ioengine=libaio --rw=randrw --direct=1 --time_based --runtime=60 00:16:45.664 fio_test: (g=0): rw=randrw, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=libaio, iodepth=128 00:16:45.664 fio-3.35 00:16:45.664 Starting 1 process 00:16:51.067 02:12:15 ublk_recovery -- ublk/ublk_recovery.sh@36 -- # kill -9 75831 00:16:51.067 02:12:15 ublk_recovery -- ublk/ublk_recovery.sh@38 -- # sleep 5 00:16:56.358 /home/vagrant/spdk_repo/spdk/test/ublk/ublk_recovery.sh: line 38: 75831 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk 00:16:56.358 02:12:20 ublk_recovery -- ublk/ublk_recovery.sh@41 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x3 -L ublk 00:16:56.358 02:12:20 ublk_recovery -- ublk/ublk_recovery.sh@42 -- # spdk_pid=75982 00:16:56.358 02:12:20 ublk_recovery -- ublk/ublk_recovery.sh@43 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:16:56.358 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:16:56.358 02:12:20 ublk_recovery -- ublk/ublk_recovery.sh@44 -- # waitforlisten 75982 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@835 -- # '[' -z 75982 ']' 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@840 -- # local max_retries=100 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@844 -- # xtrace_disable 00:16:56.358 02:12:20 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.358 [2024-12-15 02:12:20.134762] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
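This is the heart of the recovery test: a ublk disk is served by one spdk_tgt process, that process is killed with SIGKILL while fio is mid-run, and a fresh process re-attaches the still-live kernel device instead of recreating it. Condensed into the RPC calls that appear in this log (a sketch of the flow, not the full ublk_recovery.sh; the device number, queue settings, and sizes mirror this run):

    # first target: ublk device 1 backed by a 64 MiB malloc bdev
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_start_disk malloc0 1 -q 2 -d 128
    fio --filename=/dev/ublkb1 --rw=randrw --direct=1 --time_based --runtime=60 ... &
    kill -9 "$spdk_pid"                   # simulate a target crash under I/O

    # second target: recover device 1 rather than start it from scratch
    "$SPDK_BIN_DIR/spdk_tgt" -m 0x3 -L ublk &
    rpc.py ublk_create_target
    rpc.py bdev_malloc_create -b malloc0 64 4096
    rpc.py ublk_recover_disk malloc0 1    # drives the START/END_USER_RECOVERY commands logged below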
00:16:56.358 [2024-12-15 02:12:20.135501] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x3 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid75982 ] 00:16:56.358 [2024-12-15 02:12:20.296727] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 2 00:16:56.358 [2024-12-15 02:12:20.399404] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:16:56.358 [2024-12-15 02:12:20.399549] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@868 -- # return 0 00:16:56.358 02:12:21 ublk_recovery -- ublk/ublk_recovery.sh@47 -- # rpc_cmd ublk_create_target 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.358 [2024-12-15 02:12:21.076224] ublk.c: 572:ublk_ctrl_cmd_get_features: *NOTICE*: User Copy enabled 00:16:56.358 [2024-12-15 02:12:21.078505] ublk.c: 758:ublk_create_target: *NOTICE*: UBLK target created successfully 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.358 02:12:21 ublk_recovery -- ublk/ublk_recovery.sh@48 -- # rpc_cmd bdev_malloc_create -b malloc0 64 4096 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.358 02:12:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.619 malloc0 00:16:56.619 02:12:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.619 02:12:21 ublk_recovery -- ublk/ublk_recovery.sh@49 -- # rpc_cmd ublk_recover_disk malloc0 1 00:16:56.619 02:12:21 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:16:56.619 02:12:21 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:16:56.619 [2024-12-15 02:12:21.196405] ublk.c:2106:ublk_start_disk_recovery: *NOTICE*: Recovering ublk 1 with bdev malloc0 00:16:56.619 [2024-12-15 02:12:21.196457] ublk.c: 971:ublk_dev_list_register: *DEBUG*: ublk1: add to tailq 00:16:56.619 [2024-12-15 02:12:21.196468] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:56.619 [2024-12-15 02:12:21.204281] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:56.619 [2024-12-15 02:12:21.204311] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:56.619 1 00:16:56.619 02:12:21 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:16:56.619 02:12:21 ublk_recovery -- ublk/ublk_recovery.sh@52 -- # wait 75870 00:16:57.560 [2024-12-15 02:12:22.205224] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:57.560 [2024-12-15 02:12:22.213221] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:57.560 [2024-12-15 02:12:22.213238] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:58.494 [2024-12-15 02:12:23.213264] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:58.494 [2024-12-15 02:12:23.217229] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:58.494 [2024-12-15 02:12:23.217245] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: 
Ublk 1 device state 1 00:16:59.869 [2024-12-15 02:12:24.217263] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO 00:16:59.869 [2024-12-15 02:12:24.225219] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_GET_DEV_INFO completed 00:16:59.869 [2024-12-15 02:12:24.225232] ublk.c: 391:ublk_ctrl_process_cqe: *DEBUG*: ublk1: Ublk 1 device state 1 00:16:59.869 [2024-12-15 02:12:24.225239] ublk.c:2035:ublk_ctrl_start_recovery: *DEBUG*: Recovering ublk 1, num queues 2, queue depth 128, flags 0xda 00:16:59.869 [2024-12-15 02:12:24.225305] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY 00:17:21.790 [2024-12-15 02:12:45.544223] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_START_USER_RECOVERY completed 00:17:21.791 [2024-12-15 02:12:45.550767] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY 00:17:21.791 [2024-12-15 02:12:45.558382] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_END_USER_RECOVERY completed 00:17:21.791 [2024-12-15 02:12:45.558399] ublk.c: 413:ublk_ctrl_process_cqe: *NOTICE*: Ublk 1 recover done successfully 00:17:48.326 00:17:48.326 fio_test: (groupid=0, jobs=1): err= 0: pid=75873: Sun Dec 15 02:13:10 2024 00:17:48.326 read: IOPS=14.4k, BW=56.1MiB/s (58.8MB/s)(3366MiB/60003msec) 00:17:48.326 slat (nsec): min=1116, max=96556, avg=4982.91, stdev=1242.81 00:17:48.326 clat (usec): min=632, max=30522k, avg=4377.98, stdev=260960.75 00:17:48.326 lat (usec): min=645, max=30522k, avg=4382.96, stdev=260960.75 00:17:48.326 clat percentiles (usec): 00:17:48.326 | 1.00th=[ 1778], 5.00th=[ 1893], 10.00th=[ 1926], 20.00th=[ 1958], 00:17:48.326 | 30.00th=[ 1975], 40.00th=[ 1991], 50.00th=[ 2008], 60.00th=[ 2024], 00:17:48.326 | 70.00th=[ 2040], 80.00th=[ 2073], 90.00th=[ 2245], 95.00th=[ 3032], 00:17:48.326 | 99.00th=[ 5080], 99.50th=[ 5604], 99.90th=[ 7111], 99.95th=[ 7898], 00:17:48.326 | 99.99th=[13173] 00:17:48.326 bw ( KiB/s): min=24848, max=122360, per=100.00%, avg=114950.78, stdev=16711.36, samples=59 00:17:48.326 iops : min= 6212, max=30590, avg=28737.69, stdev=4177.84, samples=59 00:17:48.326 write: IOPS=14.3k, BW=56.0MiB/s (58.7MB/s)(3361MiB/60003msec); 0 zone resets 00:17:48.326 slat (nsec): min=1130, max=169075, avg=5028.02, stdev=1288.78 00:17:48.326 clat (usec): min=629, max=30522k, avg=4531.07, stdev=265246.45 00:17:48.326 lat (usec): min=634, max=30522k, avg=4536.10, stdev=265246.45 00:17:48.326 clat percentiles (usec): 00:17:48.326 | 1.00th=[ 1827], 5.00th=[ 1991], 10.00th=[ 2024], 20.00th=[ 2040], 00:17:48.326 | 30.00th=[ 2073], 40.00th=[ 2073], 50.00th=[ 2089], 60.00th=[ 2114], 00:17:48.326 | 70.00th=[ 2147], 80.00th=[ 2180], 90.00th=[ 2311], 95.00th=[ 2933], 00:17:48.326 | 99.00th=[ 5145], 99.50th=[ 5669], 99.90th=[ 7111], 99.95th=[ 7963], 00:17:48.326 | 99.99th=[13304] 00:17:48.326 bw ( KiB/s): min=24360, max=122424, per=100.00%, avg=114803.80, stdev=16626.44, samples=59 00:17:48.326 iops : min= 6090, max=30606, avg=28700.95, stdev=4156.61, samples=59 00:17:48.326 lat (usec) : 750=0.01%, 1000=0.01% 00:17:48.326 lat (msec) : 2=26.65%, 4=70.63%, 10=2.68%, 20=0.03%, >=2000=0.01% 00:17:48.326 cpu : usr=3.21%, sys=14.82%, ctx=57082, majf=0, minf=13 00:17:48.326 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=100.0% 00:17:48.326 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:17:48.326 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 
00:17:48.326 issued rwts: total=861591,860448,0,0 short=0,0,0,0 dropped=0,0,0,0 00:17:48.326 latency : target=0, window=0, percentile=100.00%, depth=128 00:17:48.326 00:17:48.326 Run status group 0 (all jobs): 00:17:48.326 READ: bw=56.1MiB/s (58.8MB/s), 56.1MiB/s-56.1MiB/s (58.8MB/s-58.8MB/s), io=3366MiB (3529MB), run=60003-60003msec 00:17:48.326 WRITE: bw=56.0MiB/s (58.7MB/s), 56.0MiB/s-56.0MiB/s (58.7MB/s-58.7MB/s), io=3361MiB (3524MB), run=60003-60003msec 00:17:48.326 00:17:48.326 Disk stats (read/write): 00:17:48.326 ublkb1: ios=858317/857116, merge=0/0, ticks=3720185/3775811, in_queue=7495996, util=99.90% 00:17:48.326 02:13:10 ublk_recovery -- ublk/ublk_recovery.sh@55 -- # rpc_cmd ublk_stop_disk 1 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.326 [2024-12-15 02:13:10.295839] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV 00:17:48.326 [2024-12-15 02:13:10.336233] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_STOP_DEV completed 00:17:48.326 [2024-12-15 02:13:10.336391] ublk.c: 469:ublk_ctrl_cmd_submit: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV 00:17:48.326 [2024-12-15 02:13:10.347216] ublk.c: 349:ublk_ctrl_process_cqe: *DEBUG*: ublk1: ctrl cmd UBLK_CMD_DEL_DEV completed 00:17:48.326 [2024-12-15 02:13:10.347392] ublk.c: 985:ublk_dev_list_unregister: *DEBUG*: ublk1: remove from tailq 00:17:48.326 [2024-12-15 02:13:10.347446] ublk.c:1819:ublk_free_dev: *NOTICE*: ublk dev 1 stopped 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.326 02:13:10 ublk_recovery -- ublk/ublk_recovery.sh@56 -- # rpc_cmd ublk_destroy_target 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@563 -- # xtrace_disable 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.326 [2024-12-15 02:13:10.354293] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:48.326 [2024-12-15 02:13:10.358658] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:48.326 [2024-12-15 02:13:10.358688] ublk_rpc.c: 63:ublk_destroy_target_done: *NOTICE*: ublk target has been destroyed 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@591 -- # [[ 0 == 0 ]] 00:17:48.326 02:13:10 ublk_recovery -- ublk/ublk_recovery.sh@58 -- # trap - SIGINT SIGTERM EXIT 00:17:48.326 02:13:10 ublk_recovery -- ublk/ublk_recovery.sh@59 -- # cleanup 00:17:48.326 02:13:10 ublk_recovery -- ublk/ublk_recovery.sh@14 -- # killprocess 75982 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@954 -- # '[' -z 75982 ']' 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@958 -- # kill -0 75982 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@959 -- # uname 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 75982 00:17:48.326 killing process with pid 75982 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@972 -- # echo 'killing process with pid 75982' 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@973 -- # kill 75982 00:17:48.326 02:13:10 ublk_recovery -- common/autotest_common.sh@978 -- # 
wait 75982 00:17:48.326 [2024-12-15 02:13:11.412342] ublk.c: 835:_ublk_fini: *DEBUG*: finish shutdown 00:17:48.326 [2024-12-15 02:13:11.412381] ublk.c: 766:_ublk_fini_done: *DEBUG*: 00:17:48.326 00:17:48.326 real 1m4.256s 00:17:48.326 user 1m47.131s 00:17:48.326 sys 0m21.483s 00:17:48.326 02:13:12 ublk_recovery -- common/autotest_common.sh@1130 -- # xtrace_disable 00:17:48.326 02:13:12 ublk_recovery -- common/autotest_common.sh@10 -- # set +x 00:17:48.326 ************************************ 00:17:48.326 END TEST ublk_recovery 00:17:48.326 ************************************ 00:17:48.326 02:13:12 -- spdk/autotest.sh@251 -- # [[ 0 -eq 1 ]] 00:17:48.326 02:13:12 -- spdk/autotest.sh@256 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@260 -- # timing_exit lib 00:17:48.326 02:13:12 -- common/autotest_common.sh@732 -- # xtrace_disable 00:17:48.326 02:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.326 02:13:12 -- spdk/autotest.sh@262 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@267 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@276 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@311 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@315 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@319 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@324 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@333 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@338 -- # '[' 0 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@342 -- # '[' 1 -eq 1 ']' 00:17:48.326 02:13:12 -- spdk/autotest.sh@343 -- # run_test ftl /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:48.327 02:13:12 -- common/autotest_common.sh@1105 -- # '[' 2 -le 1 ']' 00:17:48.327 02:13:12 -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:48.327 02:13:12 -- common/autotest_common.sh@10 -- # set +x 00:17:48.327 ************************************ 00:17:48.327 START TEST ftl 00:17:48.327 ************************************ 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:48.327 * Looking for test storage... 
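A quick arithmetic check on the fio summary above: 861,591 read IOs × 4 KiB ≈ 3,366 MiB, which over the 60,003 ms runtime works out to ≈ 56.1 MiB/s, matching the READ status line; the write side agrees the same way (860,448 × 4 KiB ≈ 3,361 MiB, ≈ 56.0 MiB/s). The unusually large completion-latency stdev (~0.26 s against a ~2 ms median) is consistent with queued I/O stalling while the first target was dead and completing only after recovery.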
00:17:48.327 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1711 -- # lcov --version 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:48.327 02:13:12 ftl -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:48.327 02:13:12 ftl -- scripts/common.sh@336 -- # IFS=.-: 00:17:48.327 02:13:12 ftl -- scripts/common.sh@336 -- # read -ra ver1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@337 -- # IFS=.-: 00:17:48.327 02:13:12 ftl -- scripts/common.sh@337 -- # read -ra ver2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@338 -- # local 'op=<' 00:17:48.327 02:13:12 ftl -- scripts/common.sh@340 -- # ver1_l=2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@341 -- # ver2_l=1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:48.327 02:13:12 ftl -- scripts/common.sh@344 -- # case "$op" in 00:17:48.327 02:13:12 ftl -- scripts/common.sh@345 -- # : 1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:48.327 02:13:12 ftl -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:48.327 02:13:12 ftl -- scripts/common.sh@365 -- # decimal 1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@353 -- # local d=1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:48.327 02:13:12 ftl -- scripts/common.sh@355 -- # echo 1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@365 -- # ver1[v]=1 00:17:48.327 02:13:12 ftl -- scripts/common.sh@366 -- # decimal 2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@353 -- # local d=2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:48.327 02:13:12 ftl -- scripts/common.sh@355 -- # echo 2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@366 -- # ver2[v]=2 00:17:48.327 02:13:12 ftl -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:48.327 02:13:12 ftl -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:48.327 02:13:12 ftl -- scripts/common.sh@368 -- # return 0 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.327 --rc genhtml_branch_coverage=1 00:17:48.327 --rc genhtml_function_coverage=1 00:17:48.327 --rc genhtml_legend=1 00:17:48.327 --rc geninfo_all_blocks=1 00:17:48.327 --rc geninfo_unexecuted_blocks=1 00:17:48.327 00:17:48.327 ' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.327 --rc genhtml_branch_coverage=1 00:17:48.327 --rc genhtml_function_coverage=1 00:17:48.327 --rc genhtml_legend=1 00:17:48.327 --rc geninfo_all_blocks=1 00:17:48.327 --rc geninfo_unexecuted_blocks=1 00:17:48.327 00:17:48.327 ' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.327 --rc genhtml_branch_coverage=1 00:17:48.327 --rc genhtml_function_coverage=1 00:17:48.327 --rc 
genhtml_legend=1 00:17:48.327 --rc geninfo_all_blocks=1 00:17:48.327 --rc geninfo_unexecuted_blocks=1 00:17:48.327 00:17:48.327 ' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:48.327 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:48.327 --rc genhtml_branch_coverage=1 00:17:48.327 --rc genhtml_function_coverage=1 00:17:48.327 --rc genhtml_legend=1 00:17:48.327 --rc geninfo_all_blocks=1 00:17:48.327 --rc geninfo_unexecuted_blocks=1 00:17:48.327 00:17:48.327 ' 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:48.327 02:13:12 ftl -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/ftl.sh 00:17:48.327 02:13:12 ftl -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:48.327 02:13:12 ftl -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:48.327 02:13:12 ftl -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:48.327 02:13:12 ftl -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:48.327 02:13:12 ftl -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.327 02:13:12 ftl -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:48.327 02:13:12 ftl -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:48.327 02:13:12 ftl -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:48.327 02:13:12 ftl -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:48.327 02:13:12 ftl -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:48.327 02:13:12 ftl -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:48.327 02:13:12 ftl -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:48.327 02:13:12 ftl -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:48.327 02:13:12 ftl -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:48.327 02:13:12 ftl -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:48.327 02:13:12 ftl -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:48.327 02:13:12 ftl -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:48.327 02:13:12 ftl -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:48.327 02:13:12 ftl -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:48.327 02:13:12 ftl -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:48.327 02:13:12 ftl -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:48.327 02:13:12 ftl -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@31 -- # trap at_ftl_exit SIGINT SIGTERM EXIT 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@34 -- # 
PCI_ALLOWED= 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@34 -- # PCI_BLOCKED= 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@34 -- # DRIVER_OVERRIDE= 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh 00:17:48.327 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:17:48.327 0000:00:11.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:48.327 0000:00:10.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:48.327 0000:00:12.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:48.327 0000:00:13.0 (1b36 0010): Already using the uio_pci_generic driver 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@36 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt --wait-for-rpc 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@37 -- # spdk_tgt_pid=76788 00:17:48.327 02:13:12 ftl -- ftl/ftl.sh@38 -- # waitforlisten 76788 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@835 -- # '[' -z 76788 ']' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:48.327 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:48.327 02:13:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:48.327 [2024-12-15 02:13:12.879295] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:48.327 [2024-12-15 02:13:12.879671] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76788 ] 00:17:48.327 [2024-12-15 02:13:13.046218] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:17:48.588 [2024-12-15 02:13:13.166019] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:49.160 02:13:13 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:49.160 02:13:13 ftl -- common/autotest_common.sh@868 -- # return 0 00:17:49.160 02:13:13 ftl -- ftl/ftl.sh@40 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_set_options -d 00:17:49.419 02:13:13 ftl -- ftl/ftl.sh@41 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py framework_start_init 00:17:49.986 02:13:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_subsystem_config -j /dev/fd/62 00:17:49.986 02:13:14 ftl -- ftl/ftl.sh@43 -- # /home/vagrant/spdk_repo/spdk/scripts/gen_nvme.sh 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@46 -- # cache_size=1310720 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@47 -- # jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@47 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@47 -- # cache_disks=0000:00:10.0 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@48 -- # for disk in $cache_disks 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@49 -- # nv_cache=0000:00:10.0 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@50 -- # break 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@53 -- # '[' -z 0000:00:10.0 ']' 00:17:50.556 02:13:15 ftl -- 
ftl/ftl.sh@59 -- # base_size=1310720 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs 00:17:50.556 02:13:15 ftl -- ftl/ftl.sh@60 -- # jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address' 00:17:50.816 02:13:15 ftl -- ftl/ftl.sh@60 -- # base_disks=0000:00:11.0 00:17:50.816 02:13:15 ftl -- ftl/ftl.sh@61 -- # for disk in $base_disks 00:17:50.816 02:13:15 ftl -- ftl/ftl.sh@62 -- # device=0000:00:11.0 00:17:50.816 02:13:15 ftl -- ftl/ftl.sh@63 -- # break 00:17:50.816 02:13:15 ftl -- ftl/ftl.sh@66 -- # killprocess 76788 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@954 -- # '[' -z 76788 ']' 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@958 -- # kill -0 76788 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@959 -- # uname 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76788 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:50.817 killing process with pid 76788 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76788' 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@973 -- # kill 76788 00:17:50.817 02:13:15 ftl -- common/autotest_common.sh@978 -- # wait 76788 00:17:52.199 02:13:16 ftl -- ftl/ftl.sh@68 -- # '[' -z 0000:00:11.0 ']' 00:17:52.199 02:13:16 ftl -- ftl/ftl.sh@73 -- # run_test ftl_fio_basic /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.199 02:13:16 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:17:52.199 02:13:16 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:17:52.199 02:13:16 ftl -- common/autotest_common.sh@10 -- # set +x 00:17:52.199 ************************************ 00:17:52.199 START TEST ftl_fio_basic 00:17:52.199 ************************************ 00:17:52.199 02:13:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 0000:00:11.0 0000:00:10.0 basic 00:17:52.199 * Looking for test storage... 
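Worth noting before ftl_fio_basic gets going: the cache and base devices above were not hard-coded but chosen by listing all bdevs over RPC and filtering with jq. The same selection can be reproduced standalone against a running target (filters copied verbatim from the log; the BDFs are this run's results):

    # nv-cache candidate: non-zoned NVMe bdev with 64-byte metadata and >= 1310720 blocks
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.md_size==64 and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'
    # base candidate: any other non-zoned bdev of sufficient size, excluding the chosen cache BDF
    rpc.py bdev_get_bdevs | jq -r '.[] | select(.driver_specific.nvme[0].pci_address!="0000:00:10.0" and .zoned == false and .num_blocks >= 1310720).driver_specific.nvme[].pci_address'

Here that yielded 0000:00:10.0 for the nv-cache and 0000:00:11.0 for the base device.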
00:17:52.199 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.199 02:13:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:17:52.199 02:13:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lcov --version 00:17:52.199 02:13:16 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@333 -- # local ver1 ver1_l 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@334 -- # local ver2 ver2_l 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # IFS=.-: 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@336 -- # read -ra ver1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # IFS=.-: 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@337 -- # read -ra ver2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@338 -- # local 'op=<' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@340 -- # ver1_l=2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@341 -- # ver2_l=1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@344 -- # case "$op" in 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@345 -- # : 1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v = 0 )) 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # decimal 1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@365 -- # ver1[v]=1 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # decimal 2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@353 -- # local d=2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@355 -- # echo 2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@366 -- # ver2[v]=2 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- scripts/common.sh@368 -- # return 0 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:17:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.462 --rc genhtml_branch_coverage=1 00:17:52.462 --rc genhtml_function_coverage=1 00:17:52.462 --rc genhtml_legend=1 00:17:52.462 --rc geninfo_all_blocks=1 00:17:52.462 --rc geninfo_unexecuted_blocks=1 00:17:52.462 00:17:52.462 ' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:17:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.462 --rc 
genhtml_branch_coverage=1 00:17:52.462 --rc genhtml_function_coverage=1 00:17:52.462 --rc genhtml_legend=1 00:17:52.462 --rc geninfo_all_blocks=1 00:17:52.462 --rc geninfo_unexecuted_blocks=1 00:17:52.462 00:17:52.462 ' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:17:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.462 --rc genhtml_branch_coverage=1 00:17:52.462 --rc genhtml_function_coverage=1 00:17:52.462 --rc genhtml_legend=1 00:17:52.462 --rc geninfo_all_blocks=1 00:17:52.462 --rc geninfo_unexecuted_blocks=1 00:17:52.462 00:17:52.462 ' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:17:52.462 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:17:52.462 --rc genhtml_branch_coverage=1 00:17:52.462 --rc genhtml_function_coverage=1 00:17:52.462 --rc genhtml_legend=1 00:17:52.462 --rc geninfo_all_blocks=1 00:17:52.462 --rc geninfo_unexecuted_blocks=1 00:17:52.462 00:17:52.462 ' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/fio.sh 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:17:52.462 
02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@23 -- # spdk_ini_pid= 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@11 -- # declare -A suite 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@12 -- # suite['basic']='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@13 -- # suite['extended']='drive-prep randw-verify-qd128-ext randw-verify-qd2048-ext randw randr randrw unmap' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@14 -- # suite['nightly']='drive-prep randw-verify-qd256-nght randw-verify-qd256-nght randw-verify-qd256-nght' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@16 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@23 -- # device=0000:00:11.0 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@24 -- # cache_device=0000:00:10.0 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@25 -- # tests='randw-verify randw-verify-j2 randw-verify-depth128' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@26 -- # uuid= 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@27 -- # timeout=240 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@29 -- # [[ y != y ]] 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@34 -- # '[' -z 'randw-verify randw-verify-j2 randw-verify-depth128' ']' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # export FTL_BDEV_NAME=ftl0 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@39 -- # FTL_BDEV_NAME=ftl0 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@40 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@42 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@45 -- # svcpid=76921 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@46 -- # waitforlisten 76921 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@835 -- # '[' -z 76921 ']' 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:17:52.462 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@840 -- # local max_retries=100 00:17:52.463 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 7 00:17:52.463 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:17:52.463 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
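The long stretch of RPC calls that follows assembles the FTL volume the fio jobs will run against. Condensed into its essential steps (a sketch of the calls visible below; the UUIDs and sizes are specific to this run):

    rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base disk -> nvme0n1 (1310720 x 4 KiB blocks)
    rpc.py bdev_lvol_create_lvstore nvme0n1 lvs                           # -> lvstore 700edfb0-59ff-4d51-bce4-180057714426
    rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 700edfb0-59ff-4d51-bce4-180057714426   # 101 GiB thin lvol
    rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0    # cache disk -> nvc0n1
    rpc.py bdev_split_create nvc0n1 -s 5171 1                             # 5171 MiB cache slice -> nvc0n1p0
    rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9d365cfb-111f-4cbd-a718-4d51180924ba -c nvc0n1p0 --l2p_dram_limit 60

One hiccup on the way there: fio.sh line 52 evaluates '[' -eq 1 ']' with an empty left operand (apparently an unset variable), so bash reports "unary operator expected"; the script then proceeds to the l2p_dram_size_mb=60 default regardless.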
00:17:52.463 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@844 -- # xtrace_disable 00:17:52.463 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:17:52.463 [2024-12-15 02:13:17.128438] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:17:52.463 [2024-12-15 02:13:17.128763] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid76921 ] 00:17:52.722 [2024-12-15 02:13:17.287593] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:17:52.722 [2024-12-15 02:13:17.375962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:17:52.722 [2024-12-15 02:13:17.376137] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:17:52.722 [2024-12-15 02:13:17.376166] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- common/autotest_common.sh@868 -- # return 0 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@54 -- # local name=nvme0 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@56 -- # local size=103424 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@59 -- # local base_bdev 00:17:53.290 02:13:17 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@62 -- # local base_size 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:53.549 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:17:53.809 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:53.809 { 00:17:53.809 "name": "nvme0n1", 00:17:53.809 "aliases": [ 00:17:53.809 "701d45f9-18a6-4b04-b0c5-aada7062504a" 00:17:53.809 ], 00:17:53.809 "product_name": "NVMe disk", 00:17:53.809 "block_size": 4096, 00:17:53.809 "num_blocks": 1310720, 00:17:53.809 "uuid": "701d45f9-18a6-4b04-b0c5-aada7062504a", 00:17:53.809 "numa_id": -1, 00:17:53.809 "assigned_rate_limits": { 00:17:53.809 "rw_ios_per_sec": 0, 00:17:53.809 "rw_mbytes_per_sec": 0, 00:17:53.809 "r_mbytes_per_sec": 0, 00:17:53.809 "w_mbytes_per_sec": 0 00:17:53.809 }, 00:17:53.809 "claimed": false, 00:17:53.809 "zoned": false, 00:17:53.809 "supported_io_types": { 00:17:53.809 "read": true, 00:17:53.809 "write": true, 00:17:53.809 "unmap": true, 00:17:53.809 "flush": true, 00:17:53.809 "reset": true, 00:17:53.809 "nvme_admin": true, 00:17:53.809 "nvme_io": true, 00:17:53.809 "nvme_io_md": 
false, 00:17:53.809 "write_zeroes": true, 00:17:53.809 "zcopy": false, 00:17:53.809 "get_zone_info": false, 00:17:53.809 "zone_management": false, 00:17:53.809 "zone_append": false, 00:17:53.809 "compare": true, 00:17:53.809 "compare_and_write": false, 00:17:53.809 "abort": true, 00:17:53.809 "seek_hole": false, 00:17:53.809 "seek_data": false, 00:17:53.809 "copy": true, 00:17:53.809 "nvme_iov_md": false 00:17:53.809 }, 00:17:53.809 "driver_specific": { 00:17:53.809 "nvme": [ 00:17:53.809 { 00:17:53.809 "pci_address": "0000:00:11.0", 00:17:53.809 "trid": { 00:17:53.809 "trtype": "PCIe", 00:17:53.809 "traddr": "0000:00:11.0" 00:17:53.809 }, 00:17:53.809 "ctrlr_data": { 00:17:53.809 "cntlid": 0, 00:17:53.809 "vendor_id": "0x1b36", 00:17:53.809 "model_number": "QEMU NVMe Ctrl", 00:17:53.809 "serial_number": "12341", 00:17:53.809 "firmware_revision": "8.0.0", 00:17:53.809 "subnqn": "nqn.2019-08.org.qemu:12341", 00:17:53.809 "oacs": { 00:17:53.810 "security": 0, 00:17:53.810 "format": 1, 00:17:53.810 "firmware": 0, 00:17:53.810 "ns_manage": 1 00:17:53.810 }, 00:17:53.810 "multi_ctrlr": false, 00:17:53.810 "ana_reporting": false 00:17:53.810 }, 00:17:53.810 "vs": { 00:17:53.810 "nvme_version": "1.4" 00:17:53.810 }, 00:17:53.810 "ns_data": { 00:17:53.810 "id": 1, 00:17:53.810 "can_share": false 00:17:53.810 } 00:17:53.810 } 00:17:53.810 ], 00:17:53.810 "mp_policy": "active_passive" 00:17:53.810 } 00:17:53.810 } 00:17:53.810 ]' 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=1310720 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 5120 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@63 -- # base_size=5120 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@67 -- # clear_lvols 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:17:53.810 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:17:54.069 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@28 -- # stores= 00:17:54.069 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:17:54.327 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@68 -- # lvs=700edfb0-59ff-4d51-bce4-180057714426 00:17:54.327 02:13:18 ftl.ftl_fio_basic -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 700edfb0-59ff-4d51-bce4-180057714426 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/fio.sh@48 -- # split_bdev=9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # create_nv_cache_bdev nvc0 0000:00:10.0 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@35 -- # local name=nvc0 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@37 -- # local base_bdev=9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 
ftl.ftl_fio_basic -- ftl/common.sh@38 -- # local cache_size= 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # get_bdev_size 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:54.585 { 00:17:54.585 "name": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:54.585 "aliases": [ 00:17:54.585 "lvs/nvme0n1p0" 00:17:54.585 ], 00:17:54.585 "product_name": "Logical Volume", 00:17:54.585 "block_size": 4096, 00:17:54.585 "num_blocks": 26476544, 00:17:54.585 "uuid": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:54.585 "assigned_rate_limits": { 00:17:54.585 "rw_ios_per_sec": 0, 00:17:54.585 "rw_mbytes_per_sec": 0, 00:17:54.585 "r_mbytes_per_sec": 0, 00:17:54.585 "w_mbytes_per_sec": 0 00:17:54.585 }, 00:17:54.585 "claimed": false, 00:17:54.585 "zoned": false, 00:17:54.585 "supported_io_types": { 00:17:54.585 "read": true, 00:17:54.585 "write": true, 00:17:54.585 "unmap": true, 00:17:54.585 "flush": false, 00:17:54.585 "reset": true, 00:17:54.585 "nvme_admin": false, 00:17:54.585 "nvme_io": false, 00:17:54.585 "nvme_io_md": false, 00:17:54.585 "write_zeroes": true, 00:17:54.585 "zcopy": false, 00:17:54.585 "get_zone_info": false, 00:17:54.585 "zone_management": false, 00:17:54.585 "zone_append": false, 00:17:54.585 "compare": false, 00:17:54.585 "compare_and_write": false, 00:17:54.585 "abort": false, 00:17:54.585 "seek_hole": true, 00:17:54.585 "seek_data": true, 00:17:54.585 "copy": false, 00:17:54.585 "nvme_iov_md": false 00:17:54.585 }, 00:17:54.585 "driver_specific": { 00:17:54.585 "lvol": { 00:17:54.585 "lvol_store_uuid": "700edfb0-59ff-4d51-bce4-180057714426", 00:17:54.585 "base_bdev": "nvme0n1", 00:17:54.585 "thin_provision": true, 00:17:54.585 "num_allocated_clusters": 0, 00:17:54.585 "snapshot": false, 00:17:54.585 "clone": false, 00:17:54.585 "esnap_clone": false 00:17:54.585 } 00:17:54.585 } 00:17:54.585 } 00:17:54.585 ]' 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:54.585 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@41 -- # local base_size=5171 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@44 -- # local nvc_bdev 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- 
ftl/common.sh@47 -- # [[ -z '' ]] 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # get_bdev_size 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:54.843 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.101 { 00:17:55.101 "name": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:55.101 "aliases": [ 00:17:55.101 "lvs/nvme0n1p0" 00:17:55.101 ], 00:17:55.101 "product_name": "Logical Volume", 00:17:55.101 "block_size": 4096, 00:17:55.101 "num_blocks": 26476544, 00:17:55.101 "uuid": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:55.101 "assigned_rate_limits": { 00:17:55.101 "rw_ios_per_sec": 0, 00:17:55.101 "rw_mbytes_per_sec": 0, 00:17:55.101 "r_mbytes_per_sec": 0, 00:17:55.101 "w_mbytes_per_sec": 0 00:17:55.101 }, 00:17:55.101 "claimed": false, 00:17:55.101 "zoned": false, 00:17:55.101 "supported_io_types": { 00:17:55.101 "read": true, 00:17:55.101 "write": true, 00:17:55.101 "unmap": true, 00:17:55.101 "flush": false, 00:17:55.101 "reset": true, 00:17:55.101 "nvme_admin": false, 00:17:55.101 "nvme_io": false, 00:17:55.101 "nvme_io_md": false, 00:17:55.101 "write_zeroes": true, 00:17:55.101 "zcopy": false, 00:17:55.101 "get_zone_info": false, 00:17:55.101 "zone_management": false, 00:17:55.101 "zone_append": false, 00:17:55.101 "compare": false, 00:17:55.101 "compare_and_write": false, 00:17:55.101 "abort": false, 00:17:55.101 "seek_hole": true, 00:17:55.101 "seek_data": true, 00:17:55.101 "copy": false, 00:17:55.101 "nvme_iov_md": false 00:17:55.101 }, 00:17:55.101 "driver_specific": { 00:17:55.101 "lvol": { 00:17:55.101 "lvol_store_uuid": "700edfb0-59ff-4d51-bce4-180057714426", 00:17:55.101 "base_bdev": "nvme0n1", 00:17:55.101 "thin_provision": true, 00:17:55.101 "num_allocated_clusters": 0, 00:17:55.101 "snapshot": false, 00:17:55.101 "clone": false, 00:17:55.101 "esnap_clone": false 00:17:55.101 } 00:17:55.101 } 00:17:55.101 } 00:17:55.101 ]' 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@48 -- # cache_size=5171 00:17:55.101 02:13:19 ftl.ftl_fio_basic -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@49 -- # nv_cache=nvc0n1p0 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@51 -- # l2p_percentage=60 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@52 -- # '[' -eq 1 ']' 00:17:55.359 
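The "unary operator expected" failure printed on the next line comes from fio.sh line 52, traced just above as '[' -eq 1 ']': the variable on the left of -eq expanded to nothing, so test received no first operand. The test simply evaluates false and the run continues (fio.sh@56 follows), but the usual hardening is to quote and default the operand; a hypothetical fix (the flag name is assumed, not taken from the script):

    if [ "${some_flag:-0}" -eq 1 ]; then   # empty/unset now reads as 0 instead of vanishing
        echo "flag set"
    fi
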
/home/vagrant/spdk_repo/spdk/test/ftl/fio.sh: line 52: [: -eq: unary operator expected 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # get_bdev_size 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1382 -- # local bdev_name=9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1383 -- # local bdev_info 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1384 -- # local bs 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1385 -- # local nb 00:17:55.359 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 9d365cfb-111f-4cbd-a718-4d51180924ba 00:17:55.616 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:17:55.616 { 00:17:55.616 "name": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:55.616 "aliases": [ 00:17:55.616 "lvs/nvme0n1p0" 00:17:55.617 ], 00:17:55.617 "product_name": "Logical Volume", 00:17:55.617 "block_size": 4096, 00:17:55.617 "num_blocks": 26476544, 00:17:55.617 "uuid": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:55.617 "assigned_rate_limits": { 00:17:55.617 "rw_ios_per_sec": 0, 00:17:55.617 "rw_mbytes_per_sec": 0, 00:17:55.617 "r_mbytes_per_sec": 0, 00:17:55.617 "w_mbytes_per_sec": 0 00:17:55.617 }, 00:17:55.617 "claimed": false, 00:17:55.617 "zoned": false, 00:17:55.617 "supported_io_types": { 00:17:55.617 "read": true, 00:17:55.617 "write": true, 00:17:55.617 "unmap": true, 00:17:55.617 "flush": false, 00:17:55.617 "reset": true, 00:17:55.617 "nvme_admin": false, 00:17:55.617 "nvme_io": false, 00:17:55.617 "nvme_io_md": false, 00:17:55.617 "write_zeroes": true, 00:17:55.617 "zcopy": false, 00:17:55.617 "get_zone_info": false, 00:17:55.617 "zone_management": false, 00:17:55.617 "zone_append": false, 00:17:55.617 "compare": false, 00:17:55.617 "compare_and_write": false, 00:17:55.617 "abort": false, 00:17:55.617 "seek_hole": true, 00:17:55.617 "seek_data": true, 00:17:55.617 "copy": false, 00:17:55.617 "nvme_iov_md": false 00:17:55.617 }, 00:17:55.617 "driver_specific": { 00:17:55.617 "lvol": { 00:17:55.617 "lvol_store_uuid": "700edfb0-59ff-4d51-bce4-180057714426", 00:17:55.617 "base_bdev": "nvme0n1", 00:17:55.617 "thin_provision": true, 00:17:55.617 "num_allocated_clusters": 0, 00:17:55.617 "snapshot": false, 00:17:55.617 "clone": false, 00:17:55.617 "esnap_clone": false 00:17:55.617 } 00:17:55.617 } 00:17:55.617 } 00:17:55.617 ]' 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1387 -- # bs=4096 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1388 -- # nb=26476544 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- common/autotest_common.sh@1392 -- # echo 103424 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@56 -- # l2p_dram_size_mb=60 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@58 -- # '[' -z '' ']' 00:17:55.617 02:13:20 ftl.ftl_fio_basic -- ftl/fio.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 9d365cfb-111f-4cbd-a718-4d51180924ba -c nvc0n1p0 --l2p_dram_limit 60 00:17:55.892 [2024-12-15 02:13:20.520526] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.892 [2024-12-15 02:13:20.520571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:17:55.892 [2024-12-15 02:13:20.520585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:17:55.892 [2024-12-15 02:13:20.520592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.892 [2024-12-15 02:13:20.520640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.892 [2024-12-15 02:13:20.520650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:55.892 [2024-12-15 02:13:20.520659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:17:55.892 [2024-12-15 02:13:20.520665] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.520703] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:17:55.893 [2024-12-15 02:13:20.524673] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:17:55.893 [2024-12-15 02:13:20.524711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.524718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:55.893 [2024-12-15 02:13:20.524727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.019 ms 00:17:55.893 [2024-12-15 02:13:20.524733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.524796] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4ec20b9f-7692-4efe-bb41-5a54ef1eb44e 00:17:55.893 [2024-12-15 02:13:20.526121] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.526156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:17:55.893 [2024-12-15 02:13:20.526165] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:17:55.893 [2024-12-15 02:13:20.526173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.532968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.532999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:55.893 [2024-12-15 02:13:20.533008] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.727 ms 00:17:55.893 [2024-12-15 02:13:20.533016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.533110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.533121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:55.893 [2024-12-15 02:13:20.533128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:55.893 [2024-12-15 02:13:20.533139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.533179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.533190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:17:55.893 [2024-12-15 02:13:20.533215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:17:55.893 [2024-12-15 02:13:20.533223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:17:55.893 [2024-12-15 02:13:20.533244] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:17:55.893 [2024-12-15 02:13:20.536457] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.536484] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:55.893 [2024-12-15 02:13:20.536495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.215 ms 00:17:55.893 [2024-12-15 02:13:20.536504] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.536541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.536548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:17:55.893 [2024-12-15 02:13:20.536557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:17:55.893 [2024-12-15 02:13:20.536564] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.536598] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:17:55.893 [2024-12-15 02:13:20.536722] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:17:55.893 [2024-12-15 02:13:20.536736] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:17:55.893 [2024-12-15 02:13:20.536745] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:17:55.893 [2024-12-15 02:13:20.536755] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:17:55.893 [2024-12-15 02:13:20.536763] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:17:55.893 [2024-12-15 02:13:20.536771] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:17:55.893 [2024-12-15 02:13:20.536777] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:17:55.893 [2024-12-15 02:13:20.536784] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:17:55.893 [2024-12-15 02:13:20.536790] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:17:55.893 [2024-12-15 02:13:20.536798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.536805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:17:55.893 [2024-12-15 02:13:20.536813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.201 ms 00:17:55.893 [2024-12-15 02:13:20.536819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.536892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.893 [2024-12-15 02:13:20.536898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:17:55.893 [2024-12-15 02:13:20.536905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:17:55.893 [2024-12-15 02:13:20.536911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.893 [2024-12-15 02:13:20.537008] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:17:55.893 [2024-12-15 02:13:20.537016] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:17:55.893 
[2024-12-15 02:13:20.537025] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:17:55.893 [2024-12-15 02:13:20.537045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537051] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537057] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:17:55.893 [2024-12-15 02:13:20.537065] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537070] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.893 [2024-12-15 02:13:20.537076] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:17:55.893 [2024-12-15 02:13:20.537082] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:17:55.893 [2024-12-15 02:13:20.537089] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:17:55.893 [2024-12-15 02:13:20.537095] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:17:55.893 [2024-12-15 02:13:20.537119] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:17:55.893 [2024-12-15 02:13:20.537124] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:17:55.893 [2024-12-15 02:13:20.537138] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537144] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537150] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:17:55.893 [2024-12-15 02:13:20.537157] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537162] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537169] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:17:55.893 [2024-12-15 02:13:20.537178] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537190] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:17:55.893 [2024-12-15 02:13:20.537213] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537219] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537226] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:17:55.893 [2024-12-15 02:13:20.537232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:17:55.893 [2024-12-15 02:13:20.537253] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537271] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 
0.25 MiB 00:17:55.893 [2024-12-15 02:13:20.537278] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:17:55.893 [2024-12-15 02:13:20.537283] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:17:55.893 [2024-12-15 02:13:20.537289] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:17:55.893 [2024-12-15 02:13:20.537294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:17:55.893 [2024-12-15 02:13:20.537301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:17:55.893 [2024-12-15 02:13:20.537306] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:17:55.893 [2024-12-15 02:13:20.537318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:17:55.893 [2024-12-15 02:13:20.537325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537330] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:17:55.893 [2024-12-15 02:13:20.537338] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:17:55.893 [2024-12-15 02:13:20.537344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:17:55.893 [2024-12-15 02:13:20.537357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:17:55.893 [2024-12-15 02:13:20.537366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:17:55.893 [2024-12-15 02:13:20.537371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:17:55.893 [2024-12-15 02:13:20.537378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:17:55.893 [2024-12-15 02:13:20.537383] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:17:55.893 [2024-12-15 02:13:20.537389] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:17:55.893 [2024-12-15 02:13:20.537396] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:17:55.893 [2024-12-15 02:13:20.537405] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.893 [2024-12-15 02:13:20.537414] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:17:55.893 [2024-12-15 02:13:20.537421] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:17:55.894 [2024-12-15 02:13:20.537427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:17:55.894 [2024-12-15 02:13:20.537434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:17:55.894 [2024-12-15 02:13:20.537439] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:17:55.894 [2024-12-15 02:13:20.537447] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:17:55.894 [2024-12-15 
02:13:20.537453] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:17:55.894 [2024-12-15 02:13:20.537460] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:17:55.894 [2024-12-15 02:13:20.537465] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:17:55.894 [2024-12-15 02:13:20.537474] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537485] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537491] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:17:55.894 [2024-12-15 02:13:20.537503] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:17:55.894 [2024-12-15 02:13:20.537511] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:17:55.894 [2024-12-15 02:13:20.537526] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:17:55.894 [2024-12-15 02:13:20.537532] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:17:55.894 [2024-12-15 02:13:20.537539] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:17:55.894 [2024-12-15 02:13:20.537545] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:55.894 [2024-12-15 02:13:20.537552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:17:55.894 [2024-12-15 02:13:20.537558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:17:55.894 [2024-12-15 02:13:20.537565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:55.894 [2024-12-15 02:13:20.537631] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
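Everything from "Check configuration" onward is the startup trace of a single RPC, invoked at fio.sh@60 above:

    rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d 9d365cfb-111f-4cbd-a718-4d51180924ba \
        -c nvc0n1p0 --l2p_dram_limit 60

It layers the FTL bdev ftl0 over the thin lvol as base device and nvc0n1p0 as NV cache; since this is a fresh superblock (Create new FTL, UUID 4ec20b9f-...), the cache data region must be scrubbed first, which is the multi-second step logged next.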
00:17:55.894 [2024-12-15 02:13:20.537643] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:17:58.421 [2024-12-15 02:13:22.991130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.421 [2024-12-15 02:13:22.991185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:17:58.421 [2024-12-15 02:13:22.991210] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2453.490 ms 00:17:58.421 [2024-12-15 02:13:22.991223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.421 [2024-12-15 02:13:23.019129] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.421 [2024-12-15 02:13:23.019330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:58.421 [2024-12-15 02:13:23.019349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.607 ms 00:17:58.421 [2024-12-15 02:13:23.019360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.421 [2024-12-15 02:13:23.019489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.421 [2024-12-15 02:13:23.019503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:17:58.421 [2024-12-15 02:13:23.019512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:17:58.421 [2024-12-15 02:13:23.019524] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.421 [2024-12-15 02:13:23.064519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.421 [2024-12-15 02:13:23.064561] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:58.421 [2024-12-15 02:13:23.064575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.939 ms 00:17:58.421 [2024-12-15 02:13:23.064586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.421 [2024-12-15 02:13:23.064626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.421 [2024-12-15 02:13:23.064638] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:58.422 [2024-12-15 02:13:23.064646] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:58.422 [2024-12-15 02:13:23.064655] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.065101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.065131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:58.422 [2024-12-15 02:13:23.065140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.383 ms 00:17:58.422 [2024-12-15 02:13:23.065152] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.065294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.065307] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:58.422 [2024-12-15 02:13:23.065316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:58.422 [2024-12-15 02:13:23.065328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.081340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.081507] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:58.422 [2024-12-15 
02:13:23.081524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.984 ms 00:17:58.422 [2024-12-15 02:13:23.081534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.093749] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:17:58.422 [2024-12-15 02:13:23.110864] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.110895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:17:58.422 [2024-12-15 02:13:23.110907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.230 ms 00:17:58.422 [2024-12-15 02:13:23.110917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.156179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.156223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:17:58.422 [2024-12-15 02:13:23.156251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.219 ms 00:17:58.422 [2024-12-15 02:13:23.156260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.156447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.156463] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:17:58.422 [2024-12-15 02:13:23.156476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.142 ms 00:17:58.422 [2024-12-15 02:13:23.156483] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.422 [2024-12-15 02:13:23.179311] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.422 [2024-12-15 02:13:23.179469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:17:58.422 [2024-12-15 02:13:23.179490] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.773 ms 00:17:58.422 [2024-12-15 02:13:23.179499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-12-15 02:13:23.201920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-12-15 02:13:23.201963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:17:58.680 [2024-12-15 02:13:23.201975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.375 ms 00:17:58.680 [2024-12-15 02:13:23.201983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-12-15 02:13:23.202608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-12-15 02:13:23.202634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:17:58.680 [2024-12-15 02:13:23.202651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.578 ms 00:17:58.680 [2024-12-15 02:13:23.202658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-12-15 02:13:23.274951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-12-15 02:13:23.275099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:17:58.680 [2024-12-15 02:13:23.275125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 72.226 ms 00:17:58.680 [2024-12-15 02:13:23.275136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-12-15 
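The "l2p maximum resident size is: 59 (of 60) MiB" notice above ties back to --l2p_dram_limit 60 and to the earlier layout dump: with 20971520 L2P entries at 4 bytes each, the full table is 80 MiB (the 80.00 MiB l2p region), so only a roughly 60 MiB window of it is kept resident in DRAM. Worked from the logged values:

    echo $(( 20971520 * 4 / 1024 / 1024 ))   # full L2P table: 80 MiB; DRAM-resident cap: 60 MiB
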
02:13:23.299702] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-12-15 02:13:23.299740] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:17:58.680 [2024-12-15 02:13:23.299755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.218 ms 00:17:58.680 [2024-12-15 02:13:23.299764] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.680 [2024-12-15 02:13:23.322219] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.680 [2024-12-15 02:13:23.322249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:17:58.680 [2024-12-15 02:13:23.322261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.404 ms 00:17:58.680 [2024-12-15 02:13:23.322268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-12-15 02:13:23.345609] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-12-15 02:13:23.345737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:17:58.681 [2024-12-15 02:13:23.345758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.294 ms 00:17:58.681 [2024-12-15 02:13:23.345765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-12-15 02:13:23.345813] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-12-15 02:13:23.345823] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:17:58.681 [2024-12-15 02:13:23.345837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:17:58.681 [2024-12-15 02:13:23.345845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-12-15 02:13:23.345936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:58.681 [2024-12-15 02:13:23.345947] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:17:58.681 [2024-12-15 02:13:23.345957] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:17:58.681 [2024-12-15 02:13:23.345965] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:58.681 [2024-12-15 02:13:23.346942] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2825.966 ms, result 0 00:17:58.681 { 00:17:58.681 "name": "ftl0", 00:17:58.681 "uuid": "4ec20b9f-7692-4efe-bb41-5a54ef1eb44e" 00:17:58.681 } 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- ftl/fio.sh@65 -- # waitforbdev ftl0 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@905 -- # local i 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:17:58.681 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:17:58.939 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:17:59.197 [ 00:17:59.197 { 00:17:59.197 "name": "ftl0", 00:17:59.197 "aliases": [ 00:17:59.197 "4ec20b9f-7692-4efe-bb41-5a54ef1eb44e" 00:17:59.197 ], 00:17:59.197 "product_name": "FTL 
disk", 00:17:59.197 "block_size": 4096, 00:17:59.197 "num_blocks": 20971520, 00:17:59.197 "uuid": "4ec20b9f-7692-4efe-bb41-5a54ef1eb44e", 00:17:59.197 "assigned_rate_limits": { 00:17:59.197 "rw_ios_per_sec": 0, 00:17:59.197 "rw_mbytes_per_sec": 0, 00:17:59.197 "r_mbytes_per_sec": 0, 00:17:59.197 "w_mbytes_per_sec": 0 00:17:59.197 }, 00:17:59.197 "claimed": false, 00:17:59.197 "zoned": false, 00:17:59.197 "supported_io_types": { 00:17:59.197 "read": true, 00:17:59.197 "write": true, 00:17:59.197 "unmap": true, 00:17:59.197 "flush": true, 00:17:59.197 "reset": false, 00:17:59.197 "nvme_admin": false, 00:17:59.197 "nvme_io": false, 00:17:59.197 "nvme_io_md": false, 00:17:59.197 "write_zeroes": true, 00:17:59.197 "zcopy": false, 00:17:59.197 "get_zone_info": false, 00:17:59.197 "zone_management": false, 00:17:59.197 "zone_append": false, 00:17:59.197 "compare": false, 00:17:59.197 "compare_and_write": false, 00:17:59.197 "abort": false, 00:17:59.197 "seek_hole": false, 00:17:59.197 "seek_data": false, 00:17:59.197 "copy": false, 00:17:59.197 "nvme_iov_md": false 00:17:59.197 }, 00:17:59.197 "driver_specific": { 00:17:59.197 "ftl": { 00:17:59.197 "base_bdev": "9d365cfb-111f-4cbd-a718-4d51180924ba", 00:17:59.197 "cache": "nvc0n1p0" 00:17:59.197 } 00:17:59.197 } 00:17:59.197 } 00:17:59.197 ] 00:17:59.197 02:13:23 ftl.ftl_fio_basic -- common/autotest_common.sh@911 -- # return 0 00:17:59.197 02:13:23 ftl.ftl_fio_basic -- ftl/fio.sh@68 -- # echo '{"subsystems": [' 00:17:59.197 02:13:23 ftl.ftl_fio_basic -- ftl/fio.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:17:59.456 02:13:23 ftl.ftl_fio_basic -- ftl/fio.sh@70 -- # echo ']}' 00:17:59.456 02:13:23 ftl.ftl_fio_basic -- ftl/fio.sh@73 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:17:59.456 [2024-12-15 02:13:24.155808] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.155925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:17:59.456 [2024-12-15 02:13:24.155939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:17:59.456 [2024-12-15 02:13:24.155947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.155982] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:17:59.456 [2024-12-15 02:13:24.158246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.158271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:17:59.456 [2024-12-15 02:13:24.158282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.248 ms 00:17:59.456 [2024-12-15 02:13:24.158289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.158705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.158723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:17:59.456 [2024-12-15 02:13:24.158733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:17:59.456 [2024-12-15 02:13:24.158739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.161183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.161210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:17:59.456 
[2024-12-15 02:13:24.161220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.415 ms 00:17:59.456 [2024-12-15 02:13:24.161228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.165843] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.165864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:17:59.456 [2024-12-15 02:13:24.165874] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.593 ms 00:17:59.456 [2024-12-15 02:13:24.165880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.184051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.184150] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:17:59.456 [2024-12-15 02:13:24.184176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.113 ms 00:17:59.456 [2024-12-15 02:13:24.184183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.196817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.196912] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:17:59.456 [2024-12-15 02:13:24.196997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.583 ms 00:17:59.456 [2024-12-15 02:13:24.197016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.197173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.197210] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:17:59.456 [2024-12-15 02:13:24.197230] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.113 ms 00:17:59.456 [2024-12-15 02:13:24.197271] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.456 [2024-12-15 02:13:24.214974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.456 [2024-12-15 02:13:24.215061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:17:59.456 [2024-12-15 02:13:24.215104] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.667 ms 00:17:59.456 [2024-12-15 02:13:24.215121] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.716 [2024-12-15 02:13:24.232361] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.716 [2024-12-15 02:13:24.232445] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:17:59.716 [2024-12-15 02:13:24.232486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.200 ms 00:17:59.716 [2024-12-15 02:13:24.232503] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.716 [2024-12-15 02:13:24.249626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.716 [2024-12-15 02:13:24.249711] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:17:59.716 [2024-12-15 02:13:24.249753] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.083 ms 00:17:59.716 [2024-12-15 02:13:24.249770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.716 [2024-12-15 02:13:24.266605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.716 [2024-12-15 02:13:24.266688] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:17:59.716 [2024-12-15 02:13:24.266728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.752 ms 00:17:59.716 [2024-12-15 02:13:24.266745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.716 [2024-12-15 02:13:24.266782] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:17:59.716 [2024-12-15 02:13:24.266804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.266982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267042] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267484] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 
[2024-12-15 02:13:24.267531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.267989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268106] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 
state: free 00:17:59.716 [2024-12-15 02:13:24.268360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268701] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268867] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:17:59.716 [2024-12-15 02:13:24.268889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.268913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269113] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 
0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269327] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.269979] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.270001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.270025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.270071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.270099] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:17:59.717 [2024-12-15 02:13:24.270128] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:17:59.717 [2024-12-15 02:13:24.270145] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4ec20b9f-7692-4efe-bb41-5a54ef1eb44e 00:17:59.717 [2024-12-15 02:13:24.270167] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:17:59.717 [2024-12-15 02:13:24.270185] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:17:59.717 [2024-12-15 02:13:24.270237] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:17:59.717 [2024-12-15 02:13:24.270259] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:17:59.717 [2024-12-15 02:13:24.270274] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:17:59.717 [2024-12-15 02:13:24.270291] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:17:59.717 [2024-12-15 02:13:24.270306] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:17:59.717 [2024-12-15 02:13:24.270321] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:17:59.717 [2024-12-15 02:13:24.270335] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:17:59.717 [2024-12-15 02:13:24.270351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.717 [2024-12-15 02:13:24.270390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:17:59.717 [2024-12-15 02:13:24.270405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.570 ms 00:17:59.717 [2024-12-15 02:13:24.270412] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.280577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.717 [2024-12-15 02:13:24.280658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:17:59.717 [2024-12-15 02:13:24.280699] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.127 ms 00:17:59.717 [2024-12-15 02:13:24.280716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.281030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:17:59.717 [2024-12-15 02:13:24.281300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:17:59.717 [2024-12-15 02:13:24.281379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.267 ms 00:17:59.717 [2024-12-15 02:13:24.281399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.317979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.318076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:17:59.717 [2024-12-15 02:13:24.318118] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.318136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
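The statistics dump above is the picture of a clean, never-written device: all 100 bands report 0 / 261120 valid blocks in state free, user writes are 0, and the 960 total writes are pure metadata traffic, which is why WAF (write amplification, total writes over user writes) prints as inf — a division by zero user writes. Band capacity implied by the logged geometry:

    echo $(( 261120 * 4096 / 1024 / 1024 ))   # 261120 blocks * 4 KiB = 1020 MiB per band
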
00:17:59.717 [2024-12-15 02:13:24.318209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.318227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:17:59.717 [2024-12-15 02:13:24.318245] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.318260] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.318352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.318486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:17:59.717 [2024-12-15 02:13:24.318504] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.318519] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.318557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.318574] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:17:59.717 [2024-12-15 02:13:24.318590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.318645] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.384568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.384608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:17:59.717 [2024-12-15 02:13:24.384618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.384626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.434849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.434884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:17:59.717 [2024-12-15 02:13:24.434896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.434903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.434997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:17:59.717 [2024-12-15 02:13:24.435017] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.435023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:17:59.717 [2024-12-15 02:13:24.435098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.435104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435194] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435216] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:17:59.717 [2024-12-15 02:13:24.435224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 
02:13:24.435232] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:17:59.717 [2024-12-15 02:13:24.435299] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.435305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:17:59.717 [2024-12-15 02:13:24.435367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.435373] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:17:59.717 [2024-12-15 02:13:24.435443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:17:59.717 [2024-12-15 02:13:24.435452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:17:59.717 [2024-12-15 02:13:24.435458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:17:59.717 [2024-12-15 02:13:24.435608] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 279.761 ms, result 0 00:17:59.717 true 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- ftl/fio.sh@75 -- # killprocess 76921 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@954 -- # '[' -z 76921 ']' 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@958 -- # kill -0 76921 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # uname 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:17:59.718 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 76921 00:17:59.976 killing process with pid 76921 00:17:59.976 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:17:59.976 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:17:59.976 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@972 -- # echo 'killing process with pid 76921' 00:17:59.976 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@973 -- # kill 76921 00:17:59.976 02:13:24 ftl.ftl_fio_basic -- common/autotest_common.sh@978 -- # wait 76921 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- ftl/fio.sh@76 -- # trap - SIGINT SIGTERM EXIT 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- 
common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:06.538 02:13:30 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify.fio 00:18:06.538 test: (g=0): rw=randwrite, bs=(R) 68.0KiB-68.0KiB, (W) 68.0KiB-68.0KiB, (T) 68.0KiB-68.0KiB, ioengine=spdk_bdev, iodepth=1 00:18:06.538 fio-3.35 00:18:06.538 Starting 1 thread 00:18:13.125 00:18:13.125 test: (groupid=0, jobs=1): err= 0: pid=77100: Sun Dec 15 02:13:37 2024 00:18:13.125 read: IOPS=687, BW=45.7MiB/s (47.9MB/s)(255MiB/5572msec) 00:18:13.125 slat (nsec): min=4008, max=55754, avg=6512.94, stdev=3204.76 00:18:13.125 clat (usec): min=297, max=7260, avg=674.89, stdev=231.63 00:18:13.125 lat (usec): min=302, max=7271, avg=681.40, stdev=232.30 00:18:13.125 clat percentiles (usec): 00:18:13.125 | 1.00th=[ 322], 5.00th=[ 392], 10.00th=[ 412], 20.00th=[ 486], 00:18:13.125 | 30.00th=[ 537], 40.00th=[ 553], 50.00th=[ 611], 60.00th=[ 807], 00:18:13.125 | 70.00th=[ 832], 80.00th=[ 881], 90.00th=[ 914], 95.00th=[ 979], 00:18:13.125 | 99.00th=[ 1139], 99.50th=[ 1221], 99.90th=[ 1467], 99.95th=[ 1549], 00:18:13.125 | 99.99th=[ 7242] 00:18:13.125 write: IOPS=692, BW=46.0MiB/s (48.2MB/s)(256MiB/5567msec); 0 zone resets 00:18:13.125 slat (usec): min=14, max=123, avg=22.00, stdev= 6.65 00:18:13.125 clat (usec): min=314, max=2071, avg=733.75, stdev=207.67 00:18:13.125 lat (usec): min=347, max=2100, avg=755.75, stdev=209.03 00:18:13.125 clat percentiles (usec): 00:18:13.125 | 1.00th=[ 343], 5.00th=[ 437], 10.00th=[ 498], 20.00th=[ 562], 00:18:13.125 | 30.00th=[ 578], 40.00th=[ 627], 50.00th=[ 652], 60.00th=[ 848], 00:18:13.125 | 70.00th=[ 906], 80.00th=[ 922], 90.00th=[ 988], 95.00th=[ 1029], 00:18:13.125 | 99.00th=[ 1205], 99.50th=[ 1270], 99.90th=[ 1582], 99.95th=[ 1909], 00:18:13.125 | 99.99th=[ 2073] 00:18:13.125 bw ( KiB/s): min=33864, max=68544, per=99.72%, avg=46969.45, stdev=11811.23, samples=11 00:18:13.125 iops : min= 498, max= 1008, avg=690.73, stdev=173.69, samples=11 00:18:13.125 lat (usec) : 500=16.47%, 750=39.35%, 1000=38.76% 
00:18:13.125 lat (msec) : 2=5.40%, 4=0.01%, 10=0.01% 00:18:13.125 cpu : usr=99.16%, sys=0.05%, ctx=12, majf=0, minf=1167 00:18:13.125 IO depths : 1=100.0%, 2=0.0%, 4=0.0%, 8=0.0%, 16=0.0%, 32=0.0%, >=64=0.0% 00:18:13.125 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.125 complete : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:13.125 issued rwts: total=3833,3856,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:13.125 latency : target=0, window=0, percentile=100.00%, depth=1 00:18:13.125 00:18:13.125 Run status group 0 (all jobs): 00:18:13.126 READ: bw=45.7MiB/s (47.9MB/s), 45.7MiB/s-45.7MiB/s (47.9MB/s-47.9MB/s), io=255MiB (267MB), run=5572-5572msec 00:18:13.126 WRITE: bw=46.0MiB/s (48.2MB/s), 46.0MiB/s-46.0MiB/s (48.2MB/s-48.2MB/s), io=256MiB (269MB), run=5567-5567msec 00:18:14.066 ----------------------------------------------------- 00:18:14.066 Suppressions used: 00:18:14.066 count bytes template 00:18:14.066 1 5 /usr/src/fio/parse.c 00:18:14.066 1 8 libtcmalloc_minimal.so 00:18:14.066 1 904 libcrypto.so 00:18:14.066 ----------------------------------------------------- 00:18:14.067 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-j2 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- 
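The fio_bdev xtrace above (repeated once per job file) shows how the harness finds the ASan runtime the fio plugin was linked against and preloads it ahead of the plugin itself. A condensed sketch of that detection, with paths taken from the trace; this simplifies autotest_common.sh rather than copying it, and $job_file stands in for the concrete .fio path:

plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev
for sanitizer in libasan libclang_rt.asan; do
  # Ask the dynamic linker which sanitizer runtime the plugin depends on.
  asan_lib=$(ldd "$plugin" | grep "$sanitizer" | awk '{print $3}')
  [ -n "$asan_lib" ] && break        # e.g. /usr/lib64/libasan.so.8
done
# ASan must be the first DSO loaded, so it is preloaded before the plugin.
LD_PRELOAD="$asan_lib $plugin" /usr/src/fio/fio "$job_file"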
common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:14.067 02:13:38 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-j2.fio 00:18:14.328 first_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:14.328 second_half: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:14.328 fio-3.35 00:18:14.328 Starting 2 threads 00:18:41.031 00:18:41.031 first_half: (groupid=0, jobs=1): err= 0: pid=77225: Sun Dec 15 02:14:03 2024 00:18:41.031 read: IOPS=2831, BW=11.1MiB/s (11.6MB/s)(256MiB/23119msec) 00:18:41.031 slat (nsec): min=3083, max=83831, avg=5303.94, stdev=1360.78 00:18:41.031 clat (usec): min=1333, max=410027, avg=37309.06, stdev=25119.58 00:18:41.031 lat (usec): min=1337, max=410033, avg=37314.36, stdev=25119.71 00:18:41.031 clat percentiles (msec): 00:18:41.031 | 1.00th=[ 10], 5.00th=[ 28], 10.00th=[ 29], 20.00th=[ 31], 00:18:41.031 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 31], 60.00th=[ 34], 00:18:41.031 | 70.00th=[ 36], 80.00th=[ 37], 90.00th=[ 44], 95.00th=[ 68], 00:18:41.031 | 99.00th=[ 146], 99.50th=[ 161], 99.90th=[ 321], 99.95th=[ 376], 00:18:41.031 | 99.99th=[ 401] 00:18:41.031 write: IOPS=2841, BW=11.1MiB/s (11.6MB/s)(256MiB/23061msec); 0 zone resets 00:18:41.031 slat (usec): min=3, max=1301, avg= 6.79, stdev= 8.42 00:18:41.031 clat (usec): min=338, max=68327, avg=7853.47, stdev=9744.03 00:18:41.031 lat (usec): min=345, max=68344, avg=7860.26, stdev=9744.71 00:18:41.031 clat percentiles (usec): 00:18:41.031 | 1.00th=[ 766], 5.00th=[ 914], 10.00th=[ 1205], 20.00th=[ 2442], 00:18:41.031 | 30.00th=[ 2966], 40.00th=[ 3654], 50.00th=[ 4359], 60.00th=[ 5145], 00:18:41.031 | 70.00th=[ 5866], 80.00th=[11469], 90.00th=[21103], 95.00th=[27657], 00:18:41.031 | 99.00th=[55837], 99.50th=[58983], 99.90th=[63701], 99.95th=[65799], 00:18:41.031 | 99.99th=[67634] 00:18:41.031 bw ( KiB/s): min= 336, max=55776, per=91.62%, avg=20830.08, stdev=16987.19, samples=25 00:18:41.031 iops : min= 84, max=13944, avg=5207.52, stdev=4246.80, samples=25 00:18:41.031 lat (usec) : 500=0.04%, 750=0.38%, 1000=3.10% 00:18:41.031 lat (msec) : 2=3.91%, 4=15.27%, 10=17.29%, 20=6.14%, 50=49.66% 00:18:41.031 lat (msec) : 100=2.54%, 250=1.51%, 500=0.15% 00:18:41.031 cpu : usr=99.26%, sys=0.12%, ctx=56, majf=0, minf=5554 00:18:41.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:41.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.031 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:41.031 issued rwts: total=65469,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:41.031 second_half: (groupid=0, jobs=1): err= 0: pid=77226: Sun Dec 15 02:14:03 2024 00:18:41.031 read: IOPS=2863, BW=11.2MiB/s (11.7MB/s)(256MiB/22867msec) 00:18:41.031 slat (nsec): min=3142, max=35867, avg=4379.11, stdev=1222.33 00:18:41.031 clat (msec): min=11, max=374, avg=37.84, stdev=20.83 00:18:41.031 lat (msec): min=11, max=374, avg=37.84, stdev=20.83 00:18:41.031 clat percentiles (msec): 00:18:41.031 | 1.00th=[ 27], 5.00th=[ 28], 10.00th=[ 30], 20.00th=[ 31], 00:18:41.031 | 30.00th=[ 31], 40.00th=[ 31], 50.00th=[ 32], 60.00th=[ 34], 00:18:41.031 | 70.00th=[ 36], 80.00th=[ 39], 90.00th=[ 47], 95.00th=[ 70], 
00:18:41.031 | 99.00th=[ 138], 99.50th=[ 146], 99.90th=[ 262], 99.95th=[ 300], 00:18:41.031 | 99.99th=[ 359] 00:18:41.031 write: IOPS=2882, BW=11.3MiB/s (11.8MB/s)(256MiB/22739msec); 0 zone resets 00:18:41.031 slat (usec): min=3, max=1145, avg= 5.92, stdev= 9.58 00:18:41.031 clat (usec): min=344, max=41973, avg=6831.84, stdev=6324.74 00:18:41.031 lat (usec): min=352, max=41978, avg=6837.76, stdev=6326.04 00:18:41.031 clat percentiles (usec): 00:18:41.031 | 1.00th=[ 766], 5.00th=[ 1385], 10.00th=[ 2245], 20.00th=[ 2933], 00:18:41.031 | 30.00th=[ 3556], 40.00th=[ 4146], 50.00th=[ 4752], 60.00th=[ 5211], 00:18:41.031 | 70.00th=[ 5604], 80.00th=[ 8848], 90.00th=[16909], 95.00th=[22152], 00:18:41.031 | 99.00th=[28967], 99.50th=[30016], 99.90th=[37487], 99.95th=[40633], 00:18:41.031 | 99.99th=[41157] 00:18:41.031 bw ( KiB/s): min= 592, max=47800, per=100.00%, avg=23667.27, stdev=14551.68, samples=22 00:18:41.031 iops : min= 148, max=11950, avg=5916.82, stdev=3637.92, samples=22 00:18:41.031 lat (usec) : 500=0.03%, 750=0.38%, 1000=1.15% 00:18:41.031 lat (msec) : 2=2.30%, 4=14.76%, 10=21.94%, 20=6.25%, 50=48.85% 00:18:41.031 lat (msec) : 100=2.82%, 250=1.46%, 500=0.06% 00:18:41.031 cpu : usr=99.31%, sys=0.15%, ctx=46, majf=0, minf=5557 00:18:41.031 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.9% 00:18:41.031 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:41.031 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:41.031 issued rwts: total=65488,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:41.031 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:41.031 00:18:41.031 Run status group 0 (all jobs): 00:18:41.031 READ: bw=22.1MiB/s (23.2MB/s), 11.1MiB/s-11.2MiB/s (11.6MB/s-11.7MB/s), io=512MiB (536MB), run=22867-23119msec 00:18:41.031 WRITE: bw=22.2MiB/s (23.3MB/s), 11.1MiB/s-11.3MiB/s (11.6MB/s-11.8MB/s), io=512MiB (537MB), run=22739-23061msec 00:18:41.031 ----------------------------------------------------- 00:18:41.031 Suppressions used: 00:18:41.031 count bytes template 00:18:41.031 2 10 /usr/src/fio/parse.c 00:18:41.031 2 192 /usr/src/fio/iolog.c 00:18:41.031 1 8 libtcmalloc_minimal.so 00:18:41.031 1 904 libcrypto.so 00:18:41.031 ----------------------------------------------------- 00:18:41.031 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-j2 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- ftl/fio.sh@78 -- # for test in ${tests} 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- ftl/fio.sh@79 -- # timing_enter randw-verify-depth128 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@726 -- # xtrace_disable 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- ftl/fio.sh@80 -- # fio_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1360 -- # fio_plugin /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1341 -- # local fio_dir=/usr/src/fio 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # sanitizers=('libasan' 'libclang_rt.asan') 00:18:41.031 
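The aggregate READ line in the run status group above can be reproduced from the group io size and the longest job runtime reported with it. A quick check with the 512 MiB / 23119 ms figures from that summary:

# 512 MiB read across both jobs; the slower job finished at 23119 ms.
awk 'BEGIN { printf "%.1f MiB/s\n", 512 / (23119 / 1000) }'   # -> 22.1 MiB/s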
02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1343 -- # local sanitizers 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1344 -- # local plugin=/home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1345 -- # shift 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1347 -- # local asan_lib= 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1348 -- # for sanitizer in "${sanitizers[@]}" 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # ldd /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # awk '{print $3}' 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # grep libasan 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1349 -- # asan_lib=/usr/lib64/libasan.so.8 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1350 -- # [[ -n /usr/lib64/libasan.so.8 ]] 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1351 -- # break 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # LD_PRELOAD='/usr/lib64/libasan.so.8 /home/vagrant/spdk_repo/spdk/build/fio/spdk_bdev' 00:18:41.031 02:14:05 ftl.ftl_fio_basic -- common/autotest_common.sh@1356 -- # /usr/src/fio/fio /home/vagrant/spdk_repo/spdk/test/ftl/config/fio/randw-verify-depth128.fio 00:18:41.031 test: (g=0): rw=randwrite, bs=(R) 4096B-4096B, (W) 4096B-4096B, (T) 4096B-4096B, ioengine=spdk_bdev, iodepth=128 00:18:41.031 fio-3.35 00:18:41.031 Starting 1 thread 00:18:59.149 00:18:59.149 test: (groupid=0, jobs=1): err= 0: pid=77533: Sun Dec 15 02:14:23 2024 00:18:59.149 read: IOPS=7135, BW=27.9MiB/s (29.2MB/s)(255MiB/9138msec) 00:18:59.149 slat (nsec): min=3137, max=58111, avg=5661.83, stdev=2168.00 00:18:59.149 clat (usec): min=808, max=36331, avg=17927.96, stdev=3355.21 00:18:59.149 lat (usec): min=824, max=36342, avg=17933.63, stdev=3356.11 00:18:59.149 clat percentiles (usec): 00:18:59.149 | 1.00th=[14353], 5.00th=[14615], 10.00th=[15664], 20.00th=[15926], 00:18:59.149 | 30.00th=[16057], 40.00th=[16188], 50.00th=[16319], 60.00th=[16581], 00:18:59.149 | 70.00th=[18220], 80.00th=[20317], 90.00th=[22938], 95.00th=[25297], 00:18:59.149 | 99.00th=[29492], 99.50th=[30540], 99.90th=[33817], 99.95th=[34341], 00:18:59.149 | 99.99th=[35390] 00:18:59.149 write: IOPS=8288, BW=32.4MiB/s (33.9MB/s)(256MiB/7907msec); 0 zone resets 00:18:59.149 slat (usec): min=4, max=751, avg= 9.45, stdev= 8.04 00:18:59.149 clat (usec): min=878, max=95752, avg=15366.82, stdev=18533.39 00:18:59.149 lat (usec): min=885, max=95762, avg=15376.27, stdev=18533.41 00:18:59.149 clat percentiles (usec): 00:18:59.149 | 1.00th=[ 1385], 5.00th=[ 1713], 10.00th=[ 1909], 20.00th=[ 2212], 00:18:59.149 | 30.00th=[ 2540], 40.00th=[ 3523], 50.00th=[ 9896], 60.00th=[12125], 00:18:59.149 | 70.00th=[14484], 80.00th=[17433], 90.00th=[54789], 95.00th=[57934], 00:18:59.149 | 99.00th=[61604], 99.50th=[63177], 99.90th=[66847], 99.95th=[81265], 00:18:59.149 | 99.99th=[90702] 00:18:59.149 bw ( KiB/s): min=22976, max=46488, per=98.83%, avg=32768.00, stdev=6111.92, samples=16 00:18:59.149 iops : min= 5744, max=11622, avg=8192.00, stdev=1527.98, samples=16 00:18:59.149 lat (usec) : 1000=0.02% 00:18:59.149 lat (msec) : 2=6.44%, 4=14.11%, 10=4.90%, 20=55.30%, 50=11.74% 00:18:59.149 lat (msec) : 100=7.50% 00:18:59.149 cpu : usr=98.80%, sys=0.25%, ctx=27, majf=0, minf=5563 
00:18:59.149 IO depths : 1=0.1%, 2=0.1%, 4=0.1%, 8=0.1%, 16=0.1%, 32=0.1%, >=64=99.8% 00:18:59.149 submit : 0=0.0%, 4=100.0%, 8=0.0%, 16=0.0%, 32=0.0%, 64=0.0%, >=64=0.0% 00:18:59.149 complete : 0=0.0%, 4=100.0%, 8=0.1%, 16=0.1%, 32=0.0%, 64=0.0%, >=64=0.1% 00:18:59.149 issued rwts: total=65202,65536,0,0 short=0,0,0,0 dropped=0,0,0,0 00:18:59.149 latency : target=0, window=0, percentile=100.00%, depth=128 00:18:59.149 00:18:59.149 Run status group 0 (all jobs): 00:18:59.149 READ: bw=27.9MiB/s (29.2MB/s), 27.9MiB/s-27.9MiB/s (29.2MB/s-29.2MB/s), io=255MiB (267MB), run=9138-9138msec 00:18:59.149 WRITE: bw=32.4MiB/s (33.9MB/s), 32.4MiB/s-32.4MiB/s (33.9MB/s-33.9MB/s), io=256MiB (268MB), run=7907-7907msec 00:19:01.075 ----------------------------------------------------- 00:19:01.075 Suppressions used: 00:19:01.075 count bytes template 00:19:01.075 1 5 /usr/src/fio/parse.c 00:19:01.075 2 192 /usr/src/fio/iolog.c 00:19:01.075 1 8 libtcmalloc_minimal.so 00:19:01.075 1 904 libcrypto.so 00:19:01.075 ----------------------------------------------------- 00:19:01.075 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/fio.sh@81 -- # timing_exit randw-verify-depth128 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- common/autotest_common.sh@732 -- # xtrace_disable 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/fio.sh@84 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/fio.sh@85 -- # remove_shm 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:01.075 Remove shared memory files 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@205 -- # rm -f rm -f 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@206 -- # rm -f rm -f 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid58954 /dev/shm/spdk_tgt_trace.pid75831 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- ftl/common.sh@209 -- # rm -f rm -f 00:19:01.075 ************************************ 00:19:01.075 END TEST ftl_fio_basic 00:19:01.075 ************************************ 00:19:01.075 00:19:01.075 real 1m8.840s 00:19:01.075 user 2m27.427s 00:19:01.075 sys 0m3.091s 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:01.075 02:14:25 ftl.ftl_fio_basic -- common/autotest_common.sh@10 -- # set +x 00:19:01.075 02:14:25 ftl -- ftl/ftl.sh@74 -- # run_test ftl_bdevperf /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:01.075 02:14:25 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:01.075 02:14:25 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:01.075 02:14:25 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:01.075 ************************************ 00:19:01.075 START TEST ftl_bdevperf 00:19:01.075 ************************************ 00:19:01.076 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 0000:00:11.0 0000:00:10.0 00:19:01.338 * Looking for test storage... 
00:19:01.338 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lcov --version 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # IFS=.-: 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@336 -- # read -ra ver1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # IFS=.-: 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@337 -- # read -ra ver2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@338 -- # local 'op=<' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@340 -- # ver1_l=2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@341 -- # ver2_l=1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@344 -- # case "$op" in 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@345 -- # : 1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? ver1_l : ver2_l) )) 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # decimal 1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@365 -- # ver1[v]=1 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # decimal 2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@353 -- # local d=2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@355 -- # echo 2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@366 -- # ver2[v]=2 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- scripts/common.sh@368 -- # return 0 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.338 --rc genhtml_branch_coverage=1 00:19:01.338 --rc genhtml_function_coverage=1 00:19:01.338 --rc genhtml_legend=1 00:19:01.338 --rc geninfo_all_blocks=1 00:19:01.338 --rc geninfo_unexecuted_blocks=1 00:19:01.338 00:19:01.338 ' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.338 --rc genhtml_branch_coverage=1 00:19:01.338 
--rc genhtml_function_coverage=1 00:19:01.338 --rc genhtml_legend=1 00:19:01.338 --rc geninfo_all_blocks=1 00:19:01.338 --rc geninfo_unexecuted_blocks=1 00:19:01.338 00:19:01.338 ' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.338 --rc genhtml_branch_coverage=1 00:19:01.338 --rc genhtml_function_coverage=1 00:19:01.338 --rc genhtml_legend=1 00:19:01.338 --rc geninfo_all_blocks=1 00:19:01.338 --rc geninfo_unexecuted_blocks=1 00:19:01.338 00:19:01.338 ' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:01.338 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:01.338 --rc genhtml_branch_coverage=1 00:19:01.338 --rc genhtml_function_coverage=1 00:19:01.338 --rc genhtml_legend=1 00:19:01.338 --rc geninfo_all_blocks=1 00:19:01.338 --rc geninfo_unexecuted_blocks=1 00:19:01.338 00:19:01.338 ' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/bdevperf.sh 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # export 
spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@11 -- # device=0000:00:11.0 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@12 -- # cache_device=0000:00:10.0 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@13 -- # use_append= 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@14 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@15 -- # timeout=240 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@18 -- # bdevperf_pid=77813 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@20 -- # trap 'killprocess $bdevperf_pid; exit 1' SIGINT SIGTERM EXIT 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@21 -- # waitforlisten 77813 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@835 -- # '[' -z 77813 ']' 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- ftl/bdevperf.sh@17 -- # /home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:01.338 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:01.338 02:14:25 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:01.338 [2024-12-15 02:14:26.048841] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
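waitforlisten above blocks until the freshly started bdevperf ("-z" keeps it idle until driven over RPC, "-T ftl0" names the target bdev) answers on /var/tmp/spdk.sock. A minimal sketch of that launch-and-wait pattern, probing the socket with rpc_get_methods instead of the full helper logic:

/home/vagrant/spdk_repo/spdk/build/examples/bdevperf -z -T ftl0 &
bdevperf_pid=$!
# Poll the UNIX-domain RPC socket until the app is ready to serve requests.
until /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
      rpc_get_methods >/dev/null 2>&1; do
  sleep 0.1
done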
00:19:01.338 [2024-12-15 02:14:26.049268] [ DPDK EAL parameters: bdevperf --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid77813 ] 00:19:01.599 [2024-12-15 02:14:26.216712] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:01.599 [2024-12-15 02:14:26.357515] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- common/autotest_common.sh@868 -- # return 0 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/common.sh@54 -- # local name=nvme0 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/common.sh@56 -- # local size=103424 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/common.sh@59 -- # local base_bdev 00:19:02.171 02:14:26 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@62 -- # local base_size 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:02.745 { 00:19:02.745 "name": "nvme0n1", 00:19:02.745 "aliases": [ 00:19:02.745 "ee35c729-44c4-408a-9614-96546e575767" 00:19:02.745 ], 00:19:02.745 "product_name": "NVMe disk", 00:19:02.745 "block_size": 4096, 00:19:02.745 "num_blocks": 1310720, 00:19:02.745 "uuid": "ee35c729-44c4-408a-9614-96546e575767", 00:19:02.745 "numa_id": -1, 00:19:02.745 "assigned_rate_limits": { 00:19:02.745 "rw_ios_per_sec": 0, 00:19:02.745 "rw_mbytes_per_sec": 0, 00:19:02.745 "r_mbytes_per_sec": 0, 00:19:02.745 "w_mbytes_per_sec": 0 00:19:02.745 }, 00:19:02.745 "claimed": true, 00:19:02.745 "claim_type": "read_many_write_one", 00:19:02.745 "zoned": false, 00:19:02.745 "supported_io_types": { 00:19:02.745 "read": true, 00:19:02.745 "write": true, 00:19:02.745 "unmap": true, 00:19:02.745 "flush": true, 00:19:02.745 "reset": true, 00:19:02.745 "nvme_admin": true, 00:19:02.745 "nvme_io": true, 00:19:02.745 "nvme_io_md": false, 00:19:02.745 "write_zeroes": true, 00:19:02.745 "zcopy": false, 00:19:02.745 "get_zone_info": false, 00:19:02.745 "zone_management": false, 00:19:02.745 "zone_append": false, 00:19:02.745 "compare": true, 00:19:02.745 "compare_and_write": false, 00:19:02.745 "abort": true, 00:19:02.745 "seek_hole": false, 00:19:02.745 "seek_data": false, 00:19:02.745 "copy": true, 00:19:02.745 "nvme_iov_md": false 00:19:02.745 }, 00:19:02.745 "driver_specific": { 00:19:02.745 
"nvme": [ 00:19:02.745 { 00:19:02.745 "pci_address": "0000:00:11.0", 00:19:02.745 "trid": { 00:19:02.745 "trtype": "PCIe", 00:19:02.745 "traddr": "0000:00:11.0" 00:19:02.745 }, 00:19:02.745 "ctrlr_data": { 00:19:02.745 "cntlid": 0, 00:19:02.745 "vendor_id": "0x1b36", 00:19:02.745 "model_number": "QEMU NVMe Ctrl", 00:19:02.745 "serial_number": "12341", 00:19:02.745 "firmware_revision": "8.0.0", 00:19:02.745 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:02.745 "oacs": { 00:19:02.745 "security": 0, 00:19:02.745 "format": 1, 00:19:02.745 "firmware": 0, 00:19:02.745 "ns_manage": 1 00:19:02.745 }, 00:19:02.745 "multi_ctrlr": false, 00:19:02.745 "ana_reporting": false 00:19:02.745 }, 00:19:02.745 "vs": { 00:19:02.745 "nvme_version": "1.4" 00:19:02.745 }, 00:19:02.745 "ns_data": { 00:19:02.745 "id": 1, 00:19:02.745 "can_share": false 00:19:02.745 } 00:19:02.745 } 00:19:02.745 ], 00:19:02.745 "mp_policy": "active_passive" 00:19:02.745 } 00:19:02.745 } 00:19:02.745 ]' 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 5120 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@63 -- # base_size=5120 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@67 -- # clear_lvols 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:02.745 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:03.006 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@28 -- # stores=700edfb0-59ff-4d51-bce4-180057714426 00:19:03.006 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@29 -- # for lvs in $stores 00:19:03.006 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 700edfb0-59ff-4d51-bce4-180057714426 00:19:03.266 02:14:27 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:03.527 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@68 -- # lvs=49372170-722d-4132-9262-b2850960d3a1 00:19:03.527 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 49372170-722d-4132-9262-b2850960d3a1 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@22 -- # split_bdev=ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # create_nv_cache_bdev nvc0 0000:00:10.0 ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@35 -- # local name=nvc0 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@37 -- # local base_bdev=ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@38 -- # local cache_size= 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # get_bdev_size ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:03.788 02:14:28 
ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:03.788 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.049 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.049 { 00:19:04.049 "name": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:04.049 "aliases": [ 00:19:04.049 "lvs/nvme0n1p0" 00:19:04.049 ], 00:19:04.049 "product_name": "Logical Volume", 00:19:04.049 "block_size": 4096, 00:19:04.049 "num_blocks": 26476544, 00:19:04.049 "uuid": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:04.049 "assigned_rate_limits": { 00:19:04.049 "rw_ios_per_sec": 0, 00:19:04.049 "rw_mbytes_per_sec": 0, 00:19:04.049 "r_mbytes_per_sec": 0, 00:19:04.049 "w_mbytes_per_sec": 0 00:19:04.049 }, 00:19:04.049 "claimed": false, 00:19:04.049 "zoned": false, 00:19:04.049 "supported_io_types": { 00:19:04.049 "read": true, 00:19:04.049 "write": true, 00:19:04.049 "unmap": true, 00:19:04.049 "flush": false, 00:19:04.049 "reset": true, 00:19:04.049 "nvme_admin": false, 00:19:04.049 "nvme_io": false, 00:19:04.049 "nvme_io_md": false, 00:19:04.049 "write_zeroes": true, 00:19:04.049 "zcopy": false, 00:19:04.049 "get_zone_info": false, 00:19:04.049 "zone_management": false, 00:19:04.049 "zone_append": false, 00:19:04.049 "compare": false, 00:19:04.049 "compare_and_write": false, 00:19:04.049 "abort": false, 00:19:04.049 "seek_hole": true, 00:19:04.049 "seek_data": true, 00:19:04.049 "copy": false, 00:19:04.049 "nvme_iov_md": false 00:19:04.049 }, 00:19:04.049 "driver_specific": { 00:19:04.049 "lvol": { 00:19:04.049 "lvol_store_uuid": "49372170-722d-4132-9262-b2850960d3a1", 00:19:04.049 "base_bdev": "nvme0n1", 00:19:04.049 "thin_provision": true, 00:19:04.049 "num_allocated_clusters": 0, 00:19:04.049 "snapshot": false, 00:19:04.049 "clone": false, 00:19:04.049 "esnap_clone": false 00:19:04.049 } 00:19:04.049 } 00:19:04.049 } 00:19:04.049 ]' 00:19:04.049 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@41 -- # local base_size=5171 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@44 -- # local nvc_bdev 00:19:04.050 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # get_bdev_size ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1382 -- # local bdev_name=ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1384 -- # local bs 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.311 02:14:28 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:04.572 { 00:19:04.572 "name": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:04.572 "aliases": [ 00:19:04.572 "lvs/nvme0n1p0" 00:19:04.572 ], 00:19:04.572 "product_name": "Logical Volume", 00:19:04.572 "block_size": 4096, 00:19:04.572 "num_blocks": 26476544, 00:19:04.572 "uuid": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:04.572 "assigned_rate_limits": { 00:19:04.572 "rw_ios_per_sec": 0, 00:19:04.572 "rw_mbytes_per_sec": 0, 00:19:04.572 "r_mbytes_per_sec": 0, 00:19:04.572 "w_mbytes_per_sec": 0 00:19:04.572 }, 00:19:04.572 "claimed": false, 00:19:04.572 "zoned": false, 00:19:04.572 "supported_io_types": { 00:19:04.572 "read": true, 00:19:04.572 "write": true, 00:19:04.572 "unmap": true, 00:19:04.572 "flush": false, 00:19:04.572 "reset": true, 00:19:04.572 "nvme_admin": false, 00:19:04.572 "nvme_io": false, 00:19:04.572 "nvme_io_md": false, 00:19:04.572 "write_zeroes": true, 00:19:04.572 "zcopy": false, 00:19:04.572 "get_zone_info": false, 00:19:04.572 "zone_management": false, 00:19:04.572 "zone_append": false, 00:19:04.572 "compare": false, 00:19:04.572 "compare_and_write": false, 00:19:04.572 "abort": false, 00:19:04.572 "seek_hole": true, 00:19:04.572 "seek_data": true, 00:19:04.572 "copy": false, 00:19:04.572 "nvme_iov_md": false 00:19:04.572 }, 00:19:04.572 "driver_specific": { 00:19:04.572 "lvol": { 00:19:04.572 "lvol_store_uuid": "49372170-722d-4132-9262-b2850960d3a1", 00:19:04.572 "base_bdev": "nvme0n1", 00:19:04.572 "thin_provision": true, 00:19:04.572 "num_allocated_clusters": 0, 00:19:04.572 "snapshot": false, 00:19:04.572 "clone": false, 00:19:04.572 "esnap_clone": false 00:19:04.572 } 00:19:04.572 } 00:19:04.572 } 00:19:04.572 ]' 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- ftl/common.sh@48 -- # cache_size=5171 00:19:04.572 02:14:29 ftl.ftl_bdevperf -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:04.833 02:14:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@23 -- # nv_cache=nvc0n1p0 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # get_bdev_size ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1382 -- # local bdev_name=ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- 
common/autotest_common.sh@1384 -- # local bs 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1385 -- # local nb 00:19:04.834 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ff4cc2b4-c66d-4eaa-9688-80548f63f7ef 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:05.095 { 00:19:05.095 "name": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:05.095 "aliases": [ 00:19:05.095 "lvs/nvme0n1p0" 00:19:05.095 ], 00:19:05.095 "product_name": "Logical Volume", 00:19:05.095 "block_size": 4096, 00:19:05.095 "num_blocks": 26476544, 00:19:05.095 "uuid": "ff4cc2b4-c66d-4eaa-9688-80548f63f7ef", 00:19:05.095 "assigned_rate_limits": { 00:19:05.095 "rw_ios_per_sec": 0, 00:19:05.095 "rw_mbytes_per_sec": 0, 00:19:05.095 "r_mbytes_per_sec": 0, 00:19:05.095 "w_mbytes_per_sec": 0 00:19:05.095 }, 00:19:05.095 "claimed": false, 00:19:05.095 "zoned": false, 00:19:05.095 "supported_io_types": { 00:19:05.095 "read": true, 00:19:05.095 "write": true, 00:19:05.095 "unmap": true, 00:19:05.095 "flush": false, 00:19:05.095 "reset": true, 00:19:05.095 "nvme_admin": false, 00:19:05.095 "nvme_io": false, 00:19:05.095 "nvme_io_md": false, 00:19:05.095 "write_zeroes": true, 00:19:05.095 "zcopy": false, 00:19:05.095 "get_zone_info": false, 00:19:05.095 "zone_management": false, 00:19:05.095 "zone_append": false, 00:19:05.095 "compare": false, 00:19:05.095 "compare_and_write": false, 00:19:05.095 "abort": false, 00:19:05.095 "seek_hole": true, 00:19:05.095 "seek_data": true, 00:19:05.095 "copy": false, 00:19:05.095 "nvme_iov_md": false 00:19:05.095 }, 00:19:05.095 "driver_specific": { 00:19:05.095 "lvol": { 00:19:05.095 "lvol_store_uuid": "49372170-722d-4132-9262-b2850960d3a1", 00:19:05.095 "base_bdev": "nvme0n1", 00:19:05.095 "thin_provision": true, 00:19:05.095 "num_allocated_clusters": 0, 00:19:05.095 "snapshot": false, 00:19:05.095 "clone": false, 00:19:05.095 "esnap_clone": false 00:19:05.095 } 00:19:05.095 } 00:19:05.095 } 00:19:05.095 ]' 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1387 -- # bs=4096 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- common/autotest_common.sh@1392 -- # echo 103424 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@25 -- # l2p_dram_size_mb=20 00:19:05.095 02:14:29 ftl.ftl_bdevperf -- ftl/bdevperf.sh@26 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d ff4cc2b4-c66d-4eaa-9688-80548f63f7ef -c nvc0n1p0 --l2p_dram_limit 20 00:19:05.356 [2024-12-15 02:14:29.899546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.356 [2024-12-15 02:14:29.899593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:05.356 [2024-12-15 02:14:29.899605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:05.356 [2024-12-15 02:14:29.899613] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.356 [2024-12-15 02:14:29.899654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.356 [2024-12-15 02:14:29.899663] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:05.357 [2024-12-15 02:14:29.899670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:19:05.357 [2024-12-15 02:14:29.899677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.899691] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:05.357 [2024-12-15 02:14:29.900257] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:05.357 [2024-12-15 02:14:29.900271] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.900278] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:05.357 [2024-12-15 02:14:29.900285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.584 ms 00:19:05.357 [2024-12-15 02:14:29.900310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.900330] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 94242fd7-9a2d-4e9b-a6f3-260b71bf48b3 00:19:05.357 [2024-12-15 02:14:29.901677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.901704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:05.357 [2024-12-15 02:14:29.901716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:05.357 [2024-12-15 02:14:29.901724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.908558] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.908582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:05.357 [2024-12-15 02:14:29.908591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.791 ms 00:19:05.357 [2024-12-15 02:14:29.908600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.908700] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.908708] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:05.357 [2024-12-15 02:14:29.908719] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:05.357 [2024-12-15 02:14:29.908725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.908758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.908766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:05.357 [2024-12-15 02:14:29.908773] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:05.357 [2024-12-15 02:14:29.908779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.908797] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:05.357 [2024-12-15 02:14:29.912011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.912142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:05.357 [2024-12-15 02:14:29.912155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.221 ms 00:19:05.357 [2024-12-15 02:14:29.912168] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.912209] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.912218] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:05.357 [2024-12-15 02:14:29.912224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:05.357 [2024-12-15 02:14:29.912231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.912248] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:05.357 [2024-12-15 02:14:29.912368] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:05.357 [2024-12-15 02:14:29.912379] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:05.357 [2024-12-15 02:14:29.912389] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:05.357 [2024-12-15 02:14:29.912397] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912406] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912412] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:19:05.357 [2024-12-15 02:14:29.912420] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:05.357 [2024-12-15 02:14:29.912425] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:05.357 [2024-12-15 02:14:29.912434] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:05.357 [2024-12-15 02:14:29.912441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.912449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:05.357 [2024-12-15 02:14:29.912456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.194 ms 00:19:05.357 [2024-12-15 02:14:29.912463] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.912527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.357 [2024-12-15 02:14:29.912535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:05.357 [2024-12-15 02:14:29.912541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:19:05.357 [2024-12-15 02:14:29.912550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.357 [2024-12-15 02:14:29.912618] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:05.357 [2024-12-15 02:14:29.912629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:05.357 [2024-12-15 02:14:29.912636] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912643] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912650] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:05.357 [2024-12-15 02:14:29.912656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912661] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:19:05.357 
[2024-12-15 02:14:29.912668] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:05.357 [2024-12-15 02:14:29.912673] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912679] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.357 [2024-12-15 02:14:29.912685] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:05.357 [2024-12-15 02:14:29.912698] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:19:05.357 [2024-12-15 02:14:29.912705] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:05.357 [2024-12-15 02:14:29.912712] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:05.357 [2024-12-15 02:14:29.912718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:19:05.357 [2024-12-15 02:14:29.912726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:05.357 [2024-12-15 02:14:29.912738] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912752] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:05.357 [2024-12-15 02:14:29.912758] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912769] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:05.357 [2024-12-15 02:14:29.912776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912782] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912789] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:05.357 [2024-12-15 02:14:29.912794] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912800] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912806] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:05.357 [2024-12-15 02:14:29.912812] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912826] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:05.357 [2024-12-15 02:14:29.912831] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.357 [2024-12-15 02:14:29.912843] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:05.357 [2024-12-15 02:14:29.912849] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:19:05.357 [2024-12-15 02:14:29.912854] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:05.357 [2024-12-15 02:14:29.912861] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:05.357 [2024-12-15 02:14:29.912866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] 
offset: 113.62 MiB 00:19:05.357 [2024-12-15 02:14:29.912872] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912878] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:05.357 [2024-12-15 02:14:29.912885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:19:05.357 [2024-12-15 02:14:29.912890] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:05.357 [2024-12-15 02:14:29.912902] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:05.357 [2024-12-15 02:14:29.912909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:05.357 [2024-12-15 02:14:29.912924] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:05.357 [2024-12-15 02:14:29.912929] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:05.357 [2024-12-15 02:14:29.912935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:05.357 [2024-12-15 02:14:29.912943] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:05.357 [2024-12-15 02:14:29.912950] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:05.357 [2024-12-15 02:14:29.912955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:05.357 [2024-12-15 02:14:29.912963] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:05.357 [2024-12-15 02:14:29.912970] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.912978] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:19:05.358 [2024-12-15 02:14:29.912984] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:19:05.358 [2024-12-15 02:14:29.912992] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:19:05.358 [2024-12-15 02:14:29.912997] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:19:05.358 [2024-12-15 02:14:29.913004] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:19:05.358 [2024-12-15 02:14:29.913010] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:19:05.358 [2024-12-15 02:14:29.913018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:19:05.358 [2024-12-15 02:14:29.913023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:19:05.358 [2024-12-15 02:14:29.913032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:19:05.358 [2024-12-15 02:14:29.913038] upgrade/ftl_sb_v5.c: 
416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913045] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913050] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913057] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913063] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:19:05.358 [2024-12-15 02:14:29.913070] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:05.358 [2024-12-15 02:14:29.913076] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913086] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:05.358 [2024-12-15 02:14:29.913092] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:05.358 [2024-12-15 02:14:29.913099] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:05.358 [2024-12-15 02:14:29.913105] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:05.358 [2024-12-15 02:14:29.913112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:05.358 [2024-12-15 02:14:29.913118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:05.358 [2024-12-15 02:14:29.913125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.544 ms 00:19:05.358 [2024-12-15 02:14:29.913131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:05.358 [2024-12-15 02:14:29.913171] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
00:19:05.358 [2024-12-15 02:14:29.913189] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:09.563 [2024-12-15 02:14:33.473673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.473857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:09.563 [2024-12-15 02:14:33.473915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3560.489 ms 00:19:09.563 [2024-12-15 02:14:33.473936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.497771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.497911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:09.563 [2024-12-15 02:14:33.497964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.652 ms 00:19:09.563 [2024-12-15 02:14:33.497983] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.498114] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.498137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:09.563 [2024-12-15 02:14:33.498156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:19:09.563 [2024-12-15 02:14:33.498235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.537072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.537235] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:09.563 [2024-12-15 02:14:33.537387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 38.787 ms 00:19:09.563 [2024-12-15 02:14:33.537416] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.537460] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.537478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:09.563 [2024-12-15 02:14:33.537495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:09.563 [2024-12-15 02:14:33.537556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.537999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.538102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:09.563 [2024-12-15 02:14:33.538148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.385 ms 00:19:09.563 [2024-12-15 02:14:33.538166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.538275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.538295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:09.563 [2024-12-15 02:14:33.538317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:19:09.563 [2024-12-15 02:14:33.538334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.550354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.550446] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:09.563 [2024-12-15 
02:14:33.550491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.995 ms 00:19:09.563 [2024-12-15 02:14:33.550517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.560500] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 19 (of 20) MiB 00:19:09.563 [2024-12-15 02:14:33.566095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.566181] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:09.563 [2024-12-15 02:14:33.566234] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.512 ms 00:19:09.563 [2024-12-15 02:14:33.566254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.640617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.640728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:09.563 [2024-12-15 02:14:33.640772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 74.335 ms 00:19:09.563 [2024-12-15 02:14:33.640793] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.641111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.641164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:09.563 [2024-12-15 02:14:33.641255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.127 ms 00:19:09.563 [2024-12-15 02:14:33.641281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.659691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.659793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:09.563 [2024-12-15 02:14:33.659837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.358 ms 00:19:09.563 [2024-12-15 02:14:33.659858] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.677824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.677922] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:09.563 [2024-12-15 02:14:33.677973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.808 ms 00:19:09.563 [2024-12-15 02:14:33.677990] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.678499] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.678536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:09.563 [2024-12-15 02:14:33.678553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:19:09.563 [2024-12-15 02:14:33.678601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 02:14:33.742856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.563 [2024-12-15 02:14:33.742955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:09.563 [2024-12-15 02:14:33.742998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.200 ms 00:19:09.563 [2024-12-15 02:14:33.743017] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.563 [2024-12-15 
02:14:33.762958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.564 [2024-12-15 02:14:33.763053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:09.564 [2024-12-15 02:14:33.763095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.883 ms 00:19:09.564 [2024-12-15 02:14:33.763114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.564 [2024-12-15 02:14:33.781216] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.564 [2024-12-15 02:14:33.781310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:09.564 [2024-12-15 02:14:33.781351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.054 ms 00:19:09.564 [2024-12-15 02:14:33.781370] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.564 [2024-12-15 02:14:33.800554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.564 [2024-12-15 02:14:33.800648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:09.564 [2024-12-15 02:14:33.800688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.152 ms 00:19:09.564 [2024-12-15 02:14:33.800707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.564 [2024-12-15 02:14:33.800742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.564 [2024-12-15 02:14:33.800764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:09.564 [2024-12-15 02:14:33.800780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:09.564 [2024-12-15 02:14:33.800796] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.564 [2024-12-15 02:14:33.800871] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:09.564 [2024-12-15 02:14:33.800893] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:09.564 [2024-12-15 02:14:33.800910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:19:09.564 [2024-12-15 02:14:33.800974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:09.564 [2024-12-15 02:14:33.802105] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 3902.194 ms, result 0 00:19:09.564 { 00:19:09.564 "name": "ftl0", 00:19:09.564 "uuid": "94242fd7-9a2d-4e9b-a6f3-260b71bf48b3" 00:19:09.564 } 00:19:09.564 02:14:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_stats -b ftl0 00:19:09.564 02:14:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # jq -r .name 00:19:09.564 02:14:33 ftl.ftl_bdevperf -- ftl/bdevperf.sh@28 -- # grep -qw ftl0 00:19:09.564 02:14:34 ftl.ftl_bdevperf -- ftl/bdevperf.sh@30 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632 00:19:09.564 [2024-12-15 02:14:34.117866] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:09.564 I/O size of 69632 is greater than zero copy threshold (65536). 00:19:09.564 Zero copy mechanism will not be used. 00:19:09.564 Running I/O for 4 seconds... 
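For reference, the bring-up steps traced above can be replayed by hand. A minimal sketch, assuming the bdevperf app from this run is still up and serving RPC on its default socket; the commands, lvol UUID, cache bdev, and the 20 MiB l2p_dram_limit are all taken verbatim from the trace:

    SPDK=/home/vagrant/spdk_repo/spdk
    # Create the FTL bdev on the thin-provisioned lvol, with nvc0n1p0 as the
    # write-buffer (NV) cache and the L2P table capped at 20 MiB of DRAM:
    $SPDK/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 \
        -d ff4cc2b4-c66d-4eaa-9688-80548f63f7ef -c nvc0n1p0 --l2p_dram_limit 20
    # Confirm the bdev registered under the expected name:
    $SPDK/scripts/rpc.py bdev_ftl_get_stats -b ftl0 | jq -r .name | grep -qw ftl0
    # First pass: 68 KiB random writes, queue depth 1, 4 seconds. 69632 B is
    # above bdevperf's 65536 B zero-copy threshold, hence the notice above.
    $SPDK/examples/bdev/bdevperf/bdevperf.py perform_tests -q 1 -w randwrite -t 4 -o 69632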
00:19:11.508 718.00 IOPS, 47.68 MiB/s [2024-12-15T02:14:37.215Z] 976.00 IOPS, 64.81 MiB/s [2024-12-15T02:14:38.156Z] 900.33 IOPS, 59.79 MiB/s [2024-12-15T02:14:38.156Z] 864.00 IOPS, 57.38 MiB/s 00:19:13.391 Latency(us) 00:19:13.391 [2024-12-15T02:14:38.156Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:13.391 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 1, IO size: 69632) 00:19:13.391 ftl0 : 4.00 863.84 57.36 0.00 0.00 1220.38 204.80 8368.44 00:19:13.391 [2024-12-15T02:14:38.156Z] =================================================================================================================== 00:19:13.391 [2024-12-15T02:14:38.156Z] Total : 863.84 57.36 0.00 0.00 1220.38 204.80 8368.44 00:19:13.391 [2024-12-15 02:14:38.126735] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:13.391 { 00:19:13.391 "results": [ 00:19:13.391 { 00:19:13.391 "job": "ftl0", 00:19:13.392 "core_mask": "0x1", 00:19:13.392 "workload": "randwrite", 00:19:13.392 "status": "finished", 00:19:13.392 "queue_depth": 1, 00:19:13.392 "io_size": 69632, 00:19:13.392 "runtime": 4.001905, 00:19:13.392 "iops": 863.8385968682416, 00:19:13.392 "mibps": 57.364281823281665, 00:19:13.392 "io_failed": 0, 00:19:13.392 "io_timeout": 0, 00:19:13.392 "avg_latency_us": 1220.3823786742616, 00:19:13.392 "min_latency_us": 204.8, 00:19:13.392 "max_latency_us": 8368.443076923077 00:19:13.392 } 00:19:13.392 ], 00:19:13.392 "core_count": 1 00:19:13.392 } 00:19:13.392 02:14:38 ftl.ftl_bdevperf -- ftl/bdevperf.sh@31 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w randwrite -t 4 -o 4096 00:19:13.654 [2024-12-15 02:14:38.246387] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:13.654 Running I/O for 4 seconds... 
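The MiB/s column in these tables follows directly from IOPS times I/O size. A quick sanity check of the depth-1 pass above (69632-byte writes), using nothing beyond the figures in the table:

    # 863.84 IOPS at 69632 bytes per I/O, converted to MiB/s:
    echo '863.84 * 69632 / 1048576' | bc -l   # ~57.36, matching the reported 57.36 MiB/s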
00:19:15.540 6157.00 IOPS, 24.05 MiB/s [2024-12-15T02:14:41.692Z] 5444.00 IOPS, 21.27 MiB/s [2024-12-15T02:14:42.266Z] 5123.00 IOPS, 20.01 MiB/s [2024-12-15T02:14:42.527Z] 5036.00 IOPS, 19.67 MiB/s 00:19:17.762 Latency(us) 00:19:17.762 [2024-12-15T02:14:42.527Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:17.762 Job: ftl0 (Core Mask 0x1, workload: randwrite, depth: 128, IO size: 4096) 00:19:17.762 ftl0 : 4.04 5020.29 19.61 0.00 0.00 25378.04 453.71 49807.36 00:19:17.762 [2024-12-15T02:14:42.527Z] =================================================================================================================== 00:19:17.762 [2024-12-15T02:14:42.527Z] Total : 5020.29 19.61 0.00 0.00 25378.04 0.00 49807.36 00:19:17.762 [2024-12-15 02:14:42.294488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:17.762 { 00:19:17.762 "results": [ 00:19:17.762 { 00:19:17.762 "job": "ftl0", 00:19:17.762 "core_mask": "0x1", 00:19:17.762 "workload": "randwrite", 00:19:17.762 "status": "finished", 00:19:17.762 "queue_depth": 128, 00:19:17.762 "io_size": 4096, 00:19:17.762 "runtime": 4.03801, 00:19:17.762 "iops": 5020.294650087543, 00:19:17.762 "mibps": 19.610525976904466, 00:19:17.762 "io_failed": 0, 00:19:17.762 "io_timeout": 0, 00:19:17.763 "avg_latency_us": 25378.041658672817, 00:19:17.763 "min_latency_us": 453.71076923076924, 00:19:17.763 "max_latency_us": 49807.36 00:19:17.763 } 00:19:17.763 ], 00:19:17.763 "core_count": 1 00:19:17.763 } 00:19:17.763 02:14:42 ftl.ftl_bdevperf -- ftl/bdevperf.sh@32 -- # /home/vagrant/spdk_repo/spdk/examples/bdev/bdevperf/bdevperf.py perform_tests -q 128 -w verify -t 4 -o 4096 00:19:17.763 [2024-12-15 02:14:42.408114] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl0 00:19:17.763 Running I/O for 4 seconds...
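The depth-128 figures above are also self-consistent under Little's law, which says the mean number of I/Os in flight equals IOPS times mean latency. Checking against the reported numbers:

    # 5020.29 IOPS * 25378.04 us average latency, expressed in I/Os in flight:
    echo '5020.29 * 25378.04 / 1000000' | bc -l   # ~127.4, close to the configured -q 128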
00:19:20.096 4928.00 IOPS, 19.25 MiB/s [2024-12-15T02:14:45.434Z] 4759.50 IOPS, 18.59 MiB/s [2024-12-15T02:14:46.819Z] 4747.33 IOPS, 18.54 MiB/s [2024-12-15T02:14:46.819Z] 4677.50 IOPS, 18.27 MiB/s 00:19:22.054 Latency(us) 00:19:22.054 [2024-12-15T02:14:46.819Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.054 Job: ftl0 (Core Mask 0x1, workload: verify, depth: 128, IO size: 4096) 00:19:22.054 Verification LBA range: start 0x0 length 0x1400000 00:19:22.054 ftl0 : 4.02 4688.62 18.31 0.00 0.00 27216.69 376.52 39321.60 00:19:22.054 [2024-12-15T02:14:46.819Z] =================================================================================================================== 00:19:22.054 [2024-12-15T02:14:46.819Z] Total : 4688.62 18.31 0.00 0.00 27216.69 0.00 39321.60 00:19:22.054 [2024-12-15 02:14:46.441967] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl0 00:19:22.054 { 00:19:22.054 "results": [ 00:19:22.054 { 00:19:22.054 "job": "ftl0", 00:19:22.054 "core_mask": "0x1", 00:19:22.054 "workload": "verify", 00:19:22.054 "status": "finished", 00:19:22.054 "verify_range": { 00:19:22.054 "start": 0, 00:19:22.054 "length": 20971520 00:19:22.054 }, 00:19:22.054 "queue_depth": 128, 00:19:22.054 "io_size": 4096, 00:19:22.054 "runtime": 4.016959, 00:19:22.054 "iops": 4688.621417345808, 00:19:22.054 "mibps": 18.314927411507064, 00:19:22.054 "io_failed": 0, 00:19:22.054 "io_timeout": 0, 00:19:22.054 "avg_latency_us": 27216.685430114117, 00:19:22.054 "min_latency_us": 376.5169230769231, 00:19:22.054 "max_latency_us": 39321.6 00:19:22.054 } 00:19:22.054 ], 00:19:22.054 "core_count": 1 00:19:22.054 } 00:19:22.054 02:14:46 ftl.ftl_bdevperf -- ftl/bdevperf.sh@34 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_delete -b ftl0 00:19:22.054 [2024-12-15 02:14:46.649400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.054 [2024-12-15 02:14:46.649571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:22.054 [2024-12-15 02:14:46.649595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:22.054 [2024-12-15 02:14:46.649609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.054 [2024-12-15 02:14:46.649641] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:22.054 [2024-12-15 02:14:46.652275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.054 [2024-12-15 02:14:46.652306] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:22.054 [2024-12-15 02:14:46.652324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.611 ms 00:19:22.054 [2024-12-15 02:14:46.652335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.054 [2024-12-15 02:14:46.654805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.054 [2024-12-15 02:14:46.654842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:22.054 [2024-12-15 02:14:46.654860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.439 ms 00:19:22.054 [2024-12-15 02:14:46.654879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.848128] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.848300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 
00:19:22.316 [2024-12-15 02:14:46.848333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 193.220 ms 00:19:22.316 [2024-12-15 02:14:46.848346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.854562] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.854599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:22.316 [2024-12-15 02:14:46.854618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.168 ms 00:19:22.316 [2024-12-15 02:14:46.854633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.879919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.880079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:22.316 [2024-12-15 02:14:46.880108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.190 ms 00:19:22.316 [2024-12-15 02:14:46.880120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.896336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.896384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:22.316 [2024-12-15 02:14:46.896406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.142 ms 00:19:22.316 [2024-12-15 02:14:46.896418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.896626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.896651] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:22.316 [2024-12-15 02:14:46.896672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.148 ms 00:19:22.316 [2024-12-15 02:14:46.896684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.922343] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.922504] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:22.316 [2024-12-15 02:14:46.922534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.633 ms 00:19:22.316 [2024-12-15 02:14:46.922546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.947407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.947454] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:22.316 [2024-12-15 02:14:46.947475] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.807 ms 00:19:22.316 [2024-12-15 02:14:46.947486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.972049] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.972096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:22.316 [2024-12-15 02:14:46.972117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.502 ms 00:19:22.316 [2024-12-15 02:14:46.972128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.996683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.316 [2024-12-15 02:14:46.996842] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:22.316 [2024-12-15 02:14:46.996877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.422 ms 00:19:22.316 [2024-12-15 02:14:46.996888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.316 [2024-12-15 02:14:46.996939] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:22.316 [2024-12-15 02:14:46.996960] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.996978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.996992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997022] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997069] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997100] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997114] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: 
free 00:19:22.316 [2024-12-15 02:14:46.997329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997390] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 
261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997834] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997863] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997877] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997937] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.997999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:22.316 [2024-12-15 02:14:46.998087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998104] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998492] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:22.317 [2024-12-15 02:14:46.998565] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:22.317 [2024-12-15 02:14:46.998582] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 94242fd7-9a2d-4e9b-a6f3-260b71bf48b3 00:19:22.317 [2024-12-15 02:14:46.998599] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:22.317 [2024-12-15 02:14:46.998614] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:22.317 [2024-12-15 02:14:46.998626] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:22.317 [2024-12-15 02:14:46.998642] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:22.317 [2024-12-15 02:14:46.998653] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:22.317 [2024-12-15 02:14:46.998669] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:22.317 [2024-12-15 02:14:46.998683] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:22.317 [2024-12-15 02:14:46.998700] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:22.317 [2024-12-15 02:14:46.998713] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:22.317 [2024-12-15 02:14:46.998730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.317 [2024-12-15 02:14:46.998743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:22.317 [2024-12-15 02:14:46.998760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.793 ms 00:19:22.317 [2024-12-15 02:14:46.998773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.013518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.317 [2024-12-15 02:14:47.013565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:22.317 [2024-12-15 02:14:47.013585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.686 ms 00:19:22.317 [2024-12-15 02:14:47.013597] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.014116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:22.317 [2024-12-15 02:14:47.014153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:22.317 [2024-12-15 02:14:47.014170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.462 ms 00:19:22.317 [2024-12-15 02:14:47.014182] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.052861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.317 [2024-12-15 02:14:47.052907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:22.317 [2024-12-15 02:14:47.052930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.317 [2024-12-15 02:14:47.052941] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.053034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.317 [2024-12-15 02:14:47.053047] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:22.317 [2024-12-15 02:14:47.053064] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.317 [2024-12-15 02:14:47.053077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.053241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.317 [2024-12-15 02:14:47.053261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:22.317 [2024-12-15 02:14:47.053280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.317 [2024-12-15 02:14:47.053293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.317 [2024-12-15 02:14:47.053322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.317 [2024-12-15 02:14:47.053336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:22.317 [2024-12-15 02:14:47.053352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.317 [2024-12-15 02:14:47.053365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.136323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.136382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:22.578 [2024-12-15 02:14:47.136408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.136420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.205814] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.205871] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:22.578 [2024-12-15 02:14:47.205891] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.205903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:22.578 [2024-12-15 02:14:47.206086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.206100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:22.578 [2024-12-15 02:14:47.206236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.206250] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206398] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:22.578 [2024-12-15 02:14:47.206437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 
ms 00:19:22.578 [2024-12-15 02:14:47.206450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:22.578 [2024-12-15 02:14:47.206534] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.206546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206621] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:22.578 [2024-12-15 02:14:47.206637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.206659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:22.578 [2024-12-15 02:14:47.206746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:22.578 [2024-12-15 02:14:47.206762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:22.578 [2024-12-15 02:14:47.206774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:22.578 [2024-12-15 02:14:47.206972] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 557.489 ms, result 0 00:19:22.578 true 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- ftl/bdevperf.sh@36 -- # killprocess 77813 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@954 -- # '[' -z 77813 ']' 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@958 -- # kill -0 77813 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # uname 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 77813 00:19:22.578 killing process with pid 77813 00:19:22.578 Received shutdown signal, test time was about 4.000000 seconds 00:19:22.578 00:19:22.578 Latency(us) 00:19:22.578 [2024-12-15T02:14:47.343Z] Device Information : runtime(s) IOPS MiB/s Fail/s TO/s Average min max 00:19:22.578 [2024-12-15T02:14:47.343Z] =================================================================================================================== 00:19:22.578 [2024-12-15T02:14:47.343Z] Total : 0.00 0.00 0.00 0.00 0.00 0.00 0.00 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@972 -- # echo 'killing process with pid 77813' 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@973 -- # kill 77813 00:19:22.578 02:14:47 ftl.ftl_bdevperf -- common/autotest_common.sh@978 -- # wait 77813 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@37 -- # trap - SIGINT SIGTERM EXIT 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/bdevperf.sh@39 -- # remove_shm 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/common.sh@204 -- # echo Remove shared memory files 00:19:23.522 Remove shared memory files 00:19:23.522 02:14:48 
ftl.ftl_bdevperf -- ftl/common.sh@205 -- # rm -f rm -f 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/common.sh@206 -- # rm -f rm -f 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/common.sh@207 -- # rm -f rm -f 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- ftl/common.sh@209 -- # rm -f rm -f 00:19:23.522 ************************************ 00:19:23.522 END TEST ftl_bdevperf 00:19:23.522 ************************************ 00:19:23.522 00:19:23.522 real 0m22.403s 00:19:23.522 user 0m25.019s 00:19:23.522 sys 0m0.982s 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- common/autotest_common.sh@1130 -- # xtrace_disable 00:19:23.522 02:14:48 ftl.ftl_bdevperf -- common/autotest_common.sh@10 -- # set +x 00:19:23.522 02:14:48 ftl -- ftl/ftl.sh@75 -- # run_test ftl_trim /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:23.522 02:14:48 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:19:23.522 02:14:48 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:19:23.522 02:14:48 ftl -- common/autotest_common.sh@10 -- # set +x 00:19:23.522 ************************************ 00:19:23.522 START TEST ftl_trim 00:19:23.522 ************************************ 00:19:23.522 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 0000:00:11.0 0000:00:10.0 00:19:23.783 * Looking for test storage... 00:19:23.783 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lcov --version 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@333 -- # local ver1 ver1_l 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@334 -- # local ver2 ver2_l 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@336 -- # IFS=.-: 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@336 -- # read -ra ver1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@337 -- # IFS=.-: 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@337 -- # read -ra ver2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@338 -- # local 'op=<' 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@340 -- # ver1_l=2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@341 -- # ver2_l=1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@344 -- # case "$op" in 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@345 -- # : 1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v = 0 )) 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@365 -- # decimal 1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@365 -- # ver1[v]=1 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@366 -- # decimal 2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@353 -- # local d=2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@355 -- # echo 2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@366 -- # ver2[v]=2 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:19:23.783 02:14:48 ftl.ftl_trim -- scripts/common.sh@368 -- # return 0 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:19:23.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.783 --rc genhtml_branch_coverage=1 00:19:23.783 --rc genhtml_function_coverage=1 00:19:23.783 --rc genhtml_legend=1 00:19:23.783 --rc geninfo_all_blocks=1 00:19:23.783 --rc geninfo_unexecuted_blocks=1 00:19:23.783 00:19:23.783 ' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:19:23.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.783 --rc genhtml_branch_coverage=1 00:19:23.783 --rc genhtml_function_coverage=1 00:19:23.783 --rc genhtml_legend=1 00:19:23.783 --rc geninfo_all_blocks=1 00:19:23.783 --rc geninfo_unexecuted_blocks=1 00:19:23.783 00:19:23.783 ' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:19:23.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.783 --rc genhtml_branch_coverage=1 00:19:23.783 --rc genhtml_function_coverage=1 00:19:23.783 --rc genhtml_legend=1 00:19:23.783 --rc geninfo_all_blocks=1 00:19:23.783 --rc geninfo_unexecuted_blocks=1 00:19:23.783 00:19:23.783 ' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:19:23.783 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:19:23.783 --rc genhtml_branch_coverage=1 00:19:23.783 --rc genhtml_function_coverage=1 00:19:23.783 --rc genhtml_legend=1 00:19:23.783 --rc geninfo_all_blocks=1 00:19:23.783 --rc geninfo_unexecuted_blocks=1 00:19:23.783 00:19:23.783 ' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@10 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/trim.sh 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
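The version gate above asks whether the installed lcov (1.15) predates 2. Condensed to the '<' path actually exercised by this trace (scripts/common.sh@333-@368), the comparison works roughly like the following sketch; the real function also handles other operators, omitted here.

lt() { cmp_versions "$1" '<' "$2"; }                      # scripts/common.sh@373

cmp_versions() {
    local IFS=.-:                                          # @336-@337: split on dots, dashes, colons
    read -ra ver1 <<< "$1"
    read -ra ver2 <<< "$3"
    local v
    for ((v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++)); do
        (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && return 1    # @367: not less
        (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && return 0    # @368: here 1 < 2 decides it
    done
    return 0
}

With ver1=(1 15) and ver2=(2), the first components already settle the result, so lt 1.15 2 succeeds and the lcov_branch_coverage options above get exported.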
00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@23 -- # spdk_ini_pid= 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@12 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@23 -- # device=0000:00:11.0 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@24 -- # cache_device=0000:00:10.0 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@25 -- # timeout=240 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@26 -- # data_size_in_blocks=65536 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@27 -- # unmap_size_in_blocks=1024 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@29 -- # [[ y != y ]] 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # export FTL_BDEV_NAME=ftl0 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@34 -- # FTL_BDEV_NAME=ftl0 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # export FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@35 -- # FTL_JSON_CONF=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:23.783 02:14:48 ftl.ftl_trim -- 
ftl/trim.sh@37 -- # trap 'fio_kill; exit 1' SIGINT SIGTERM EXIT 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@40 -- # svcpid=78159 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@41 -- # waitforlisten 78159 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78159 ']' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:23.783 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:23.783 02:14:48 ftl.ftl_trim -- ftl/trim.sh@39 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x7 00:19:23.783 02:14:48 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:23.783 [2024-12-15 02:14:48.524037] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:23.783 [2024-12-15 02:14:48.524412] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x7 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78159 ] 00:19:24.044 [2024-12-15 02:14:48.688510] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 3 00:19:24.306 [2024-12-15 02:14:48.815832] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:19:24.306 [2024-12-15 02:14:48.816136] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 2 00:19:24.306 [2024-12-15 02:14:48.816269] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:24.877 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:24.877 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/trim.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/common.sh@54 -- # local name=nvme0 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/common.sh@56 -- # local size=103424 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/common.sh@59 -- # local base_bdev 00:19:24.877 02:14:49 ftl.ftl_trim -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:19:25.450 02:14:49 ftl.ftl_trim -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:19:25.450 02:14:49 ftl.ftl_trim -- ftl/common.sh@62 -- # local base_size 00:19:25.450 02:14:49 ftl.ftl_trim -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:19:25.450 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:19:25.450 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:25.450 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:25.450 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:25.450 02:14:49 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:19:25.450 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:25.450 { 00:19:25.450 "name": "nvme0n1", 00:19:25.450 "aliases": [ 
00:19:25.450 "dc63885d-4360-45fa-be51-114c01b5d9e1" 00:19:25.450 ], 00:19:25.450 "product_name": "NVMe disk", 00:19:25.450 "block_size": 4096, 00:19:25.450 "num_blocks": 1310720, 00:19:25.450 "uuid": "dc63885d-4360-45fa-be51-114c01b5d9e1", 00:19:25.450 "numa_id": -1, 00:19:25.450 "assigned_rate_limits": { 00:19:25.450 "rw_ios_per_sec": 0, 00:19:25.450 "rw_mbytes_per_sec": 0, 00:19:25.450 "r_mbytes_per_sec": 0, 00:19:25.450 "w_mbytes_per_sec": 0 00:19:25.450 }, 00:19:25.450 "claimed": true, 00:19:25.451 "claim_type": "read_many_write_one", 00:19:25.451 "zoned": false, 00:19:25.451 "supported_io_types": { 00:19:25.451 "read": true, 00:19:25.451 "write": true, 00:19:25.451 "unmap": true, 00:19:25.451 "flush": true, 00:19:25.451 "reset": true, 00:19:25.451 "nvme_admin": true, 00:19:25.451 "nvme_io": true, 00:19:25.451 "nvme_io_md": false, 00:19:25.451 "write_zeroes": true, 00:19:25.451 "zcopy": false, 00:19:25.451 "get_zone_info": false, 00:19:25.451 "zone_management": false, 00:19:25.451 "zone_append": false, 00:19:25.451 "compare": true, 00:19:25.451 "compare_and_write": false, 00:19:25.451 "abort": true, 00:19:25.451 "seek_hole": false, 00:19:25.451 "seek_data": false, 00:19:25.451 "copy": true, 00:19:25.451 "nvme_iov_md": false 00:19:25.451 }, 00:19:25.451 "driver_specific": { 00:19:25.451 "nvme": [ 00:19:25.451 { 00:19:25.451 "pci_address": "0000:00:11.0", 00:19:25.451 "trid": { 00:19:25.451 "trtype": "PCIe", 00:19:25.451 "traddr": "0000:00:11.0" 00:19:25.451 }, 00:19:25.451 "ctrlr_data": { 00:19:25.451 "cntlid": 0, 00:19:25.451 "vendor_id": "0x1b36", 00:19:25.451 "model_number": "QEMU NVMe Ctrl", 00:19:25.451 "serial_number": "12341", 00:19:25.451 "firmware_revision": "8.0.0", 00:19:25.451 "subnqn": "nqn.2019-08.org.qemu:12341", 00:19:25.451 "oacs": { 00:19:25.451 "security": 0, 00:19:25.451 "format": 1, 00:19:25.451 "firmware": 0, 00:19:25.451 "ns_manage": 1 00:19:25.451 }, 00:19:25.451 "multi_ctrlr": false, 00:19:25.451 "ana_reporting": false 00:19:25.451 }, 00:19:25.451 "vs": { 00:19:25.451 "nvme_version": "1.4" 00:19:25.451 }, 00:19:25.451 "ns_data": { 00:19:25.451 "id": 1, 00:19:25.451 "can_share": false 00:19:25.451 } 00:19:25.451 } 00:19:25.451 ], 00:19:25.451 "mp_policy": "active_passive" 00:19:25.451 } 00:19:25.451 } 00:19:25.451 ]' 00:19:25.451 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:25.451 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:25.451 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:25.712 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=1310720 00:19:25.712 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:19:25.712 02:14:50 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 5120 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@63 -- # base_size=5120 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@67 -- # clear_lvols 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@28 -- # stores=49372170-722d-4132-9262-b2850960d3a1 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@29 -- # for lvs in $stores 00:19:25.712 02:14:50 ftl.ftl_trim -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py 
bdev_lvol_delete_lvstore -u 49372170-722d-4132-9262-b2850960d3a1 00:19:25.973 02:14:50 ftl.ftl_trim -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:19:26.234 02:14:50 ftl.ftl_trim -- ftl/common.sh@68 -- # lvs=de5a7ace-3a16-4412-a20b-304b998b6e1e 00:19:26.234 02:14:50 ftl.ftl_trim -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u de5a7ace-3a16-4412-a20b-304b998b6e1e 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/trim.sh@43 -- # split_bdev=599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/trim.sh@44 -- # create_nv_cache_bdev nvc0 0000:00:10.0 599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/common.sh@35 -- # local name=nvc0 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/common.sh@37 -- # local base_bdev=599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/common.sh@38 -- # local cache_size= 00:19:26.495 02:14:51 ftl.ftl_trim -- ftl/common.sh@41 -- # get_bdev_size 599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.495 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.495 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:26.495 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:26.495 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:26.495 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 599f6e06-49c5-4af7-8690-9432b2124379 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:26.756 { 00:19:26.756 "name": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:26.756 "aliases": [ 00:19:26.756 "lvs/nvme0n1p0" 00:19:26.756 ], 00:19:26.756 "product_name": "Logical Volume", 00:19:26.756 "block_size": 4096, 00:19:26.756 "num_blocks": 26476544, 00:19:26.756 "uuid": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:26.756 "assigned_rate_limits": { 00:19:26.756 "rw_ios_per_sec": 0, 00:19:26.756 "rw_mbytes_per_sec": 0, 00:19:26.756 "r_mbytes_per_sec": 0, 00:19:26.756 "w_mbytes_per_sec": 0 00:19:26.756 }, 00:19:26.756 "claimed": false, 00:19:26.756 "zoned": false, 00:19:26.756 "supported_io_types": { 00:19:26.756 "read": true, 00:19:26.756 "write": true, 00:19:26.756 "unmap": true, 00:19:26.756 "flush": false, 00:19:26.756 "reset": true, 00:19:26.756 "nvme_admin": false, 00:19:26.756 "nvme_io": false, 00:19:26.756 "nvme_io_md": false, 00:19:26.756 "write_zeroes": true, 00:19:26.756 "zcopy": false, 00:19:26.756 "get_zone_info": false, 00:19:26.756 "zone_management": false, 00:19:26.756 "zone_append": false, 00:19:26.756 "compare": false, 00:19:26.756 "compare_and_write": false, 00:19:26.756 "abort": false, 00:19:26.756 "seek_hole": true, 00:19:26.756 "seek_data": true, 00:19:26.756 "copy": false, 00:19:26.756 "nvme_iov_md": false 00:19:26.756 }, 00:19:26.756 "driver_specific": { 00:19:26.756 "lvol": { 00:19:26.756 "lvol_store_uuid": "de5a7ace-3a16-4412-a20b-304b998b6e1e", 00:19:26.756 "base_bdev": "nvme0n1", 00:19:26.756 "thin_provision": true, 00:19:26.756 "num_allocated_clusters": 0, 00:19:26.756 "snapshot": false, 00:19:26.756 "clone": false, 00:19:26.756 "esnap_clone": false 00:19:26.756 } 00:19:26.756 } 00:19:26.756 } 00:19:26.756 ]' 00:19:26.756 02:14:51 ftl.ftl_trim -- 
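Put together, the base-device provisioning traced across these steps is the following RPC sequence; every command appears verbatim in the trace, only the layout here is condensed:

$rpc_py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0   # base controller -> nvme0n1
$rpc_py bdev_lvol_get_lvstores | jq -r '.[] | .uuid'                   # finds the stale store 49372170-...
$rpc_py bdev_lvol_delete_lvstore -u 49372170-722d-4132-9262-b2850960d3a1
$rpc_py bdev_lvol_create_lvstore nvme0n1 lvs                           # -> de5a7ace-3a16-4412-a20b-304b998b6e1e
$rpc_py bdev_lvol_create nvme0n1p0 103424 -t -u de5a7ace-3a16-4412-a20b-304b998b6e1e
# -> thin-provisioned lvol 599f6e06-49c5-4af7-8690-9432b2124379, the FTL base bdev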
common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:26.756 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:26.756 02:14:51 ftl.ftl_trim -- ftl/common.sh@41 -- # local base_size=5171 00:19:26.756 02:14:51 ftl.ftl_trim -- ftl/common.sh@44 -- # local nvc_bdev 00:19:26.756 02:14:51 ftl.ftl_trim -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:19:27.016 02:14:51 ftl.ftl_trim -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:19:27.016 02:14:51 ftl.ftl_trim -- ftl/common.sh@47 -- # [[ -z '' ]] 00:19:27.016 02:14:51 ftl.ftl_trim -- ftl/common.sh@48 -- # get_bdev_size 599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.016 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.016 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.016 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:27.016 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:27.016 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.275 { 00:19:27.275 "name": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:27.275 "aliases": [ 00:19:27.275 "lvs/nvme0n1p0" 00:19:27.275 ], 00:19:27.275 "product_name": "Logical Volume", 00:19:27.275 "block_size": 4096, 00:19:27.275 "num_blocks": 26476544, 00:19:27.275 "uuid": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:27.275 "assigned_rate_limits": { 00:19:27.275 "rw_ios_per_sec": 0, 00:19:27.275 "rw_mbytes_per_sec": 0, 00:19:27.275 "r_mbytes_per_sec": 0, 00:19:27.275 "w_mbytes_per_sec": 0 00:19:27.275 }, 00:19:27.275 "claimed": false, 00:19:27.275 "zoned": false, 00:19:27.275 "supported_io_types": { 00:19:27.275 "read": true, 00:19:27.275 "write": true, 00:19:27.275 "unmap": true, 00:19:27.275 "flush": false, 00:19:27.275 "reset": true, 00:19:27.275 "nvme_admin": false, 00:19:27.275 "nvme_io": false, 00:19:27.275 "nvme_io_md": false, 00:19:27.275 "write_zeroes": true, 00:19:27.275 "zcopy": false, 00:19:27.275 "get_zone_info": false, 00:19:27.275 "zone_management": false, 00:19:27.275 "zone_append": false, 00:19:27.275 "compare": false, 00:19:27.275 "compare_and_write": false, 00:19:27.275 "abort": false, 00:19:27.275 "seek_hole": true, 00:19:27.275 "seek_data": true, 00:19:27.275 "copy": false, 00:19:27.275 "nvme_iov_md": false 00:19:27.275 }, 00:19:27.275 "driver_specific": { 00:19:27.275 "lvol": { 00:19:27.275 "lvol_store_uuid": "de5a7ace-3a16-4412-a20b-304b998b6e1e", 00:19:27.275 "base_bdev": "nvme0n1", 00:19:27.275 "thin_provision": true, 00:19:27.275 "num_allocated_clusters": 0, 00:19:27.275 "snapshot": false, 00:19:27.275 "clone": false, 00:19:27.275 "esnap_clone": false 00:19:27.275 } 00:19:27.275 } 00:19:27.275 } 00:19:27.275 ]' 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.275 02:14:51 ftl.ftl_trim -- 
common/autotest_common.sh@1387 -- # bs=4096 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # nb=26476544 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.275 02:14:51 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.275 02:14:51 ftl.ftl_trim -- ftl/common.sh@48 -- # cache_size=5171 00:19:27.275 02:14:51 ftl.ftl_trim -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:19:27.533 02:14:52 ftl.ftl_trim -- ftl/trim.sh@44 -- # nv_cache=nvc0n1p0 00:19:27.533 02:14:52 ftl.ftl_trim -- ftl/trim.sh@46 -- # l2p_percentage=60 00:19:27.533 02:14:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # get_bdev_size 599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.533 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1382 -- # local bdev_name=599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.533 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1383 -- # local bdev_info 00:19:27.533 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1384 -- # local bs 00:19:27.533 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1385 -- # local nb 00:19:27.533 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 599f6e06-49c5-4af7-8690-9432b2124379 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:19:27.791 { 00:19:27.791 "name": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:27.791 "aliases": [ 00:19:27.791 "lvs/nvme0n1p0" 00:19:27.791 ], 00:19:27.791 "product_name": "Logical Volume", 00:19:27.791 "block_size": 4096, 00:19:27.791 "num_blocks": 26476544, 00:19:27.791 "uuid": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:27.791 "assigned_rate_limits": { 00:19:27.791 "rw_ios_per_sec": 0, 00:19:27.791 "rw_mbytes_per_sec": 0, 00:19:27.791 "r_mbytes_per_sec": 0, 00:19:27.791 "w_mbytes_per_sec": 0 00:19:27.791 }, 00:19:27.791 "claimed": false, 00:19:27.791 "zoned": false, 00:19:27.791 "supported_io_types": { 00:19:27.791 "read": true, 00:19:27.791 "write": true, 00:19:27.791 "unmap": true, 00:19:27.791 "flush": false, 00:19:27.791 "reset": true, 00:19:27.791 "nvme_admin": false, 00:19:27.791 "nvme_io": false, 00:19:27.791 "nvme_io_md": false, 00:19:27.791 "write_zeroes": true, 00:19:27.791 "zcopy": false, 00:19:27.791 "get_zone_info": false, 00:19:27.791 "zone_management": false, 00:19:27.791 "zone_append": false, 00:19:27.791 "compare": false, 00:19:27.791 "compare_and_write": false, 00:19:27.791 "abort": false, 00:19:27.791 "seek_hole": true, 00:19:27.791 "seek_data": true, 00:19:27.791 "copy": false, 00:19:27.791 "nvme_iov_md": false 00:19:27.791 }, 00:19:27.791 "driver_specific": { 00:19:27.791 "lvol": { 00:19:27.791 "lvol_store_uuid": "de5a7ace-3a16-4412-a20b-304b998b6e1e", 00:19:27.791 "base_bdev": "nvme0n1", 00:19:27.791 "thin_provision": true, 00:19:27.791 "num_allocated_clusters": 0, 00:19:27.791 "snapshot": false, 00:19:27.791 "clone": false, 00:19:27.791 "esnap_clone": false 00:19:27.791 } 00:19:27.791 } 00:19:27.791 } 00:19:27.791 ]' 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1387 -- # bs=4096 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1388 -- # 
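With the 5171 MiB cache size fixed above, the remaining setup is one split of the cache controller plus the FTL create itself (ftl/common.sh@50 and trim.sh@49, traced next); as a condensed sketch of those two RPCs:

$rpc_py bdev_split_create nvc0n1 -s 5171 1    # one 5171 MiB partition -> nvc0n1p0, the NV cache
$rpc_py -t 240 bdev_ftl_create -b ftl0 -d 599f6e06-49c5-4af7-8690-9432b2124379 \
    -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10
# -d: the thin-provisioned base lvol; -c: the cache partition just split off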
nb=26476544 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:19:27.791 02:14:52 ftl.ftl_trim -- common/autotest_common.sh@1392 -- # echo 103424 00:19:27.791 02:14:52 ftl.ftl_trim -- ftl/trim.sh@47 -- # l2p_dram_size_mb=60 00:19:27.791 02:14:52 ftl.ftl_trim -- ftl/trim.sh@49 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 599f6e06-49c5-4af7-8690-9432b2124379 -c nvc0n1p0 --core_mask 7 --l2p_dram_limit 60 --overprovisioning 10 00:19:27.791 [2024-12-15 02:14:52.542341] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.791 [2024-12-15 02:14:52.542383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:27.791 [2024-12-15 02:14:52.542398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:27.791 [2024-12-15 02:14:52.542406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.791 [2024-12-15 02:14:52.544707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.791 [2024-12-15 02:14:52.544737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:27.791 [2024-12-15 02:14:52.544747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.281 ms 00:19:27.791 [2024-12-15 02:14:52.544754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.791 [2024-12-15 02:14:52.544821] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:27.791 [2024-12-15 02:14:52.545359] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:27.791 [2024-12-15 02:14:52.545384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.791 [2024-12-15 02:14:52.545392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:27.791 [2024-12-15 02:14:52.545401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.568 ms 00:19:27.791 [2024-12-15 02:14:52.545407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:27.791 [2024-12-15 02:14:52.545499] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b8fe8d93-efde-44df-91f9-97e8d7b46861 00:19:27.791 [2024-12-15 02:14:52.546768] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:27.791 [2024-12-15 02:14:52.546800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:19:27.791 [2024-12-15 02:14:52.546810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:19:27.791 [2024-12-15 02:14:52.546820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.553656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.553680] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:28.072 [2024-12-15 02:14:52.553690] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.775 ms 00:19:28.072 [2024-12-15 02:14:52.553697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.553797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.553811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:28.072 [2024-12-15 02:14:52.553819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.058 ms 00:19:28.072 [2024-12-15 02:14:52.553828] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.553858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.553866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:28.072 [2024-12-15 02:14:52.553873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:19:28.072 [2024-12-15 02:14:52.553883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.553909] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:28.072 [2024-12-15 02:14:52.557133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.557159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:28.072 [2024-12-15 02:14:52.557171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.226 ms 00:19:28.072 [2024-12-15 02:14:52.557177] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.557236] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.557255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:28.072 [2024-12-15 02:14:52.557264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:19:28.072 [2024-12-15 02:14:52.557270] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.557296] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:19:28.072 [2024-12-15 02:14:52.557409] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:28.072 [2024-12-15 02:14:52.557427] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:28.072 [2024-12-15 02:14:52.557437] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:28.072 [2024-12-15 02:14:52.557446] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557454] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557462] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:28.072 [2024-12-15 02:14:52.557468] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:28.072 [2024-12-15 02:14:52.557477] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:28.072 [2024-12-15 02:14:52.557486] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:28.072 [2024-12-15 02:14:52.557493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 [2024-12-15 02:14:52.557499] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:28.072 [2024-12-15 02:14:52.557508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.199 ms 00:19:28.072 [2024-12-15 02:14:52.557513] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.557592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.072 
[2024-12-15 02:14:52.557599] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:28.072 [2024-12-15 02:14:52.557606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:19:28.072 [2024-12-15 02:14:52.557612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.072 [2024-12-15 02:14:52.557719] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:28.072 [2024-12-15 02:14:52.557731] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:28.072 [2024-12-15 02:14:52.557740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557746] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557755] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:28.072 [2024-12-15 02:14:52.557760] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557767] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557772] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:28.072 [2024-12-15 02:14:52.557779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.072 [2024-12-15 02:14:52.557793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:28.072 [2024-12-15 02:14:52.557798] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:28.072 [2024-12-15 02:14:52.557804] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:28.072 [2024-12-15 02:14:52.557810] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:28.072 [2024-12-15 02:14:52.557817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:28.072 [2024-12-15 02:14:52.557822] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:28.072 [2024-12-15 02:14:52.557836] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557843] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557849] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:28.072 [2024-12-15 02:14:52.557855] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:28.072 [2024-12-15 02:14:52.557860] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.072 [2024-12-15 02:14:52.557867] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:28.072 [2024-12-15 02:14:52.557872] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557879] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.073 [2024-12-15 02:14:52.557883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:28.073 [2024-12-15 02:14:52.557890] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557895] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.073 [2024-12-15 02:14:52.557903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] 
Region p2l3 00:19:28.073 [2024-12-15 02:14:52.557908] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557914] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:28.073 [2024-12-15 02:14:52.557920] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:28.073 [2024-12-15 02:14:52.557927] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557932] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.073 [2024-12-15 02:14:52.557939] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:28.073 [2024-12-15 02:14:52.557944] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:28.073 [2024-12-15 02:14:52.557951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:28.073 [2024-12-15 02:14:52.557957] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:28.073 [2024-12-15 02:14:52.557963] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:28.073 [2024-12-15 02:14:52.557968] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557975] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:28.073 [2024-12-15 02:14:52.557980] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:28.073 [2024-12-15 02:14:52.557986] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.073 [2024-12-15 02:14:52.557992] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:28.073 [2024-12-15 02:14:52.557999] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:28.073 [2024-12-15 02:14:52.558004] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:28.073 [2024-12-15 02:14:52.558012] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:28.073 [2024-12-15 02:14:52.558018] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:28.073 [2024-12-15 02:14:52.558026] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:28.073 [2024-12-15 02:14:52.558032] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:28.073 [2024-12-15 02:14:52.558039] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:28.073 [2024-12-15 02:14:52.558045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:28.073 [2024-12-15 02:14:52.558052] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:28.073 [2024-12-15 02:14:52.558058] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:28.073 [2024-12-15 02:14:52.558067] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:28.073 [2024-12-15 02:14:52.558082] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:28.073 [2024-12-15 02:14:52.558088] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 
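One number in the layout dump is worth cross-checking against the reported geometry: the L2P region is simply entries times address size, and the arithmetic matches the dump exactly.

echo $(( 23592960 * 4 / 1024 / 1024 ))   # -> 90, i.e. the "Region l2p ... blocks: 90.00 MiB" above

The dump also reports an NV cache chunk count of 5; scrubbing those chunks is what dominates the startup that follows (2317.176 ms of the 2707.255 ms total).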
blk_sz:0x80 00:19:28.073 [2024-12-15 02:14:52.558094] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:28.073 [2024-12-15 02:14:52.558100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:28.073 [2024-12-15 02:14:52.558107] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:28.073 [2024-12-15 02:14:52.558114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:28.073 [2024-12-15 02:14:52.558122] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:28.073 [2024-12-15 02:14:52.558127] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:28.073 [2024-12-15 02:14:52.558136] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558157] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558164] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:28.073 [2024-12-15 02:14:52.558169] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:28.073 [2024-12-15 02:14:52.558179] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558185] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:28.073 [2024-12-15 02:14:52.558192] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:28.073 [2024-12-15 02:14:52.558211] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:28.073 [2024-12-15 02:14:52.558218] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:28.073 [2024-12-15 02:14:52.558225] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:28.073 [2024-12-15 02:14:52.558232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:28.073 [2024-12-15 02:14:52.558238] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.560 ms 00:19:28.073 [2024-12-15 02:14:52.558246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:28.073 [2024-12-15 02:14:52.558311] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region 
needs scrubbing, this may take a while. 00:19:28.073 [2024-12-15 02:14:52.558323] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:19:30.603 [2024-12-15 02:14:54.875498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.875556] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:19:30.603 [2024-12-15 02:14:54.875573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2317.176 ms 00:19:30.603 [2024-12-15 02:14:54.875584] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.903636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.903972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:30.603 [2024-12-15 02:14:54.904060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.823 ms 00:19:30.603 [2024-12-15 02:14:54.904112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.904319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.904474] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:30.603 [2024-12-15 02:14:54.904565] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:30.603 [2024-12-15 02:14:54.904609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.944712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.944933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:30.603 [2024-12-15 02:14:54.945115] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 40.030 ms 00:19:30.603 [2024-12-15 02:14:54.945534] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.945699] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.945869] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:30.603 [2024-12-15 02:14:54.946041] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:30.603 [2024-12-15 02:14:54.946117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.946705] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.946849] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:30.603 [2024-12-15 02:14:54.947266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:19:30.603 [2024-12-15 02:14:54.947371] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.947572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.947797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:30.603 [2024-12-15 02:14:54.947904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:19:30.603 [2024-12-15 02:14:54.948256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.964345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.964501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:19:30.603 [2024-12-15 02:14:54.964610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.917 ms 00:19:30.603 [2024-12-15 02:14:54.964674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:54.976798] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:30.603 [2024-12-15 02:14:54.993942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:54.994043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:30.603 [2024-12-15 02:14:54.994158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.115 ms 00:19:30.603 [2024-12-15 02:14:54.994248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:55.058735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:55.058854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:19:30.603 [2024-12-15 02:14:55.058910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.389 ms 00:19:30.603 [2024-12-15 02:14:55.058934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:55.059165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:55.059251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:30.603 [2024-12-15 02:14:55.059282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:19:30.603 [2024-12-15 02:14:55.059303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:55.083412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:55.083516] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:19:30.603 [2024-12-15 02:14:55.083569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.037 ms 00:19:30.603 [2024-12-15 02:14:55.083592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.603 [2024-12-15 02:14:55.106157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.603 [2024-12-15 02:14:55.106273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:19:30.603 [2024-12-15 02:14:55.106339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 22.504 ms 00:19:30.603 [2024-12-15 02:14:55.106361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.106955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.107035] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:30.604 [2024-12-15 02:14:55.107083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.528 ms 00:19:30.604 [2024-12-15 02:14:55.107106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.176359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.176468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:19:30.604 [2024-12-15 02:14:55.176524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 69.184 ms 00:19:30.604 [2024-12-15 02:14:55.176548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:19:30.604 [2024-12-15 02:14:55.201278] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.201387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:19:30.604 [2024-12-15 02:14:55.201439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.602 ms 00:19:30.604 [2024-12-15 02:14:55.201462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.224712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.224814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:19:30.604 [2024-12-15 02:14:55.224866] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.182 ms 00:19:30.604 [2024-12-15 02:14:55.224888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.248333] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.248453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:30.604 [2024-12-15 02:14:55.248508] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.364 ms 00:19:30.604 [2024-12-15 02:14:55.248530] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.248604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.248633] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:30.604 [2024-12-15 02:14:55.248658] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:30.604 [2024-12-15 02:14:55.248677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.248771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:30.604 [2024-12-15 02:14:55.248885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:30.604 [2024-12-15 02:14:55.248908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.034 ms 00:19:30.604 [2024-12-15 02:14:55.248928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:30.604 [2024-12-15 02:14:55.249889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:30.604 [2024-12-15 02:14:55.253055] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 2707.255 ms, result 0 00:19:30.604 [2024-12-15 02:14:55.254051] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:30.604 { 00:19:30.604 "name": "ftl0", 00:19:30.604 "uuid": "b8fe8d93-efde-44df-91f9-97e8d7b46861" 00:19:30.604 } 00:19:30.604 02:14:55 ftl.ftl_trim -- ftl/trim.sh@51 -- # waitforbdev ftl0 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@903 -- # local bdev_name=ftl0 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@904 -- # local bdev_timeout= 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@905 -- # local i 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # [[ -z '' ]] 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@906 -- # bdev_timeout=2000 00:19:30.604 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@908 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_wait_for_examine 00:19:30.861 02:14:55 ftl.ftl_trim -- 
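The waitforbdev helper entered at trim.sh@51 amounts, per this xtrace, to roughly the sketch below (not the verbatim helper, which also retries):

waitforbdev() {
    local bdev_name=$1
    local bdev_timeout=${2:-2000}     # @906: defaults to 2000 ms when none is passed
    $rpc_py bdev_wait_for_examine     # @908: let pending examine callbacks settle first
    $rpc_py bdev_get_bdevs -b "$bdev_name" -t "$bdev_timeout"   # @910: blocks until ftl0 shows up
}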
common/autotest_common.sh@910 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 -t 2000 00:19:31.118 [ 00:19:31.118 { 00:19:31.118 "name": "ftl0", 00:19:31.118 "aliases": [ 00:19:31.118 "b8fe8d93-efde-44df-91f9-97e8d7b46861" 00:19:31.118 ], 00:19:31.118 "product_name": "FTL disk", 00:19:31.118 "block_size": 4096, 00:19:31.118 "num_blocks": 23592960, 00:19:31.118 "uuid": "b8fe8d93-efde-44df-91f9-97e8d7b46861", 00:19:31.118 "assigned_rate_limits": { 00:19:31.118 "rw_ios_per_sec": 0, 00:19:31.118 "rw_mbytes_per_sec": 0, 00:19:31.118 "r_mbytes_per_sec": 0, 00:19:31.118 "w_mbytes_per_sec": 0 00:19:31.118 }, 00:19:31.118 "claimed": false, 00:19:31.118 "zoned": false, 00:19:31.118 "supported_io_types": { 00:19:31.118 "read": true, 00:19:31.118 "write": true, 00:19:31.118 "unmap": true, 00:19:31.118 "flush": true, 00:19:31.118 "reset": false, 00:19:31.118 "nvme_admin": false, 00:19:31.118 "nvme_io": false, 00:19:31.118 "nvme_io_md": false, 00:19:31.118 "write_zeroes": true, 00:19:31.118 "zcopy": false, 00:19:31.118 "get_zone_info": false, 00:19:31.118 "zone_management": false, 00:19:31.118 "zone_append": false, 00:19:31.118 "compare": false, 00:19:31.118 "compare_and_write": false, 00:19:31.118 "abort": false, 00:19:31.118 "seek_hole": false, 00:19:31.118 "seek_data": false, 00:19:31.118 "copy": false, 00:19:31.118 "nvme_iov_md": false 00:19:31.118 }, 00:19:31.118 "driver_specific": { 00:19:31.118 "ftl": { 00:19:31.118 "base_bdev": "599f6e06-49c5-4af7-8690-9432b2124379", 00:19:31.118 "cache": "nvc0n1p0" 00:19:31.118 } 00:19:31.118 } 00:19:31.118 } 00:19:31.118 ] 00:19:31.118 02:14:55 ftl.ftl_trim -- common/autotest_common.sh@911 -- # return 0 00:19:31.118 02:14:55 ftl.ftl_trim -- ftl/trim.sh@54 -- # echo '{"subsystems": [' 00:19:31.118 02:14:55 ftl.ftl_trim -- ftl/trim.sh@55 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:19:31.118 02:14:55 ftl.ftl_trim -- ftl/trim.sh@56 -- # echo ']}' 00:19:31.118 02:14:55 ftl.ftl_trim -- ftl/trim.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b ftl0 00:19:31.376 02:14:56 ftl.ftl_trim -- ftl/trim.sh@59 -- # bdev_info='[ 00:19:31.376 { 00:19:31.376 "name": "ftl0", 00:19:31.376 "aliases": [ 00:19:31.376 "b8fe8d93-efde-44df-91f9-97e8d7b46861" 00:19:31.376 ], 00:19:31.376 "product_name": "FTL disk", 00:19:31.376 "block_size": 4096, 00:19:31.376 "num_blocks": 23592960, 00:19:31.376 "uuid": "b8fe8d93-efde-44df-91f9-97e8d7b46861", 00:19:31.376 "assigned_rate_limits": { 00:19:31.376 "rw_ios_per_sec": 0, 00:19:31.376 "rw_mbytes_per_sec": 0, 00:19:31.376 "r_mbytes_per_sec": 0, 00:19:31.376 "w_mbytes_per_sec": 0 00:19:31.376 }, 00:19:31.376 "claimed": false, 00:19:31.376 "zoned": false, 00:19:31.376 "supported_io_types": { 00:19:31.376 "read": true, 00:19:31.376 "write": true, 00:19:31.376 "unmap": true, 00:19:31.376 "flush": true, 00:19:31.376 "reset": false, 00:19:31.376 "nvme_admin": false, 00:19:31.376 "nvme_io": false, 00:19:31.376 "nvme_io_md": false, 00:19:31.376 "write_zeroes": true, 00:19:31.376 "zcopy": false, 00:19:31.376 "get_zone_info": false, 00:19:31.376 "zone_management": false, 00:19:31.376 "zone_append": false, 00:19:31.376 "compare": false, 00:19:31.376 "compare_and_write": false, 00:19:31.376 "abort": false, 00:19:31.376 "seek_hole": false, 00:19:31.376 "seek_data": false, 00:19:31.376 "copy": false, 00:19:31.376 "nvme_iov_md": false 00:19:31.376 }, 00:19:31.376 "driver_specific": { 00:19:31.376 "ftl": { 00:19:31.376 "base_bdev": "599f6e06-49c5-4af7-8690-9432b2124379", 
00:19:31.376 "cache": "nvc0n1p0" 00:19:31.376 } 00:19:31.376 } 00:19:31.376 } 00:19:31.376 ]' 00:19:31.376 02:14:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # jq '.[] .num_blocks' 00:19:31.376 02:14:56 ftl.ftl_trim -- ftl/trim.sh@60 -- # nb=23592960 00:19:31.376 02:14:56 ftl.ftl_trim -- ftl/trim.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:19:31.636 [2024-12-15 02:14:56.265323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.265361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:31.636 [2024-12-15 02:14:56.265388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:31.636 [2024-12-15 02:14:56.265398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.265427] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:31.636 [2024-12-15 02:14:56.268054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.268076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:31.636 [2024-12-15 02:14:56.268089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.612 ms 00:19:31.636 [2024-12-15 02:14:56.268096] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.268584] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.268601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:31.636 [2024-12-15 02:14:56.268611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.452 ms 00:19:31.636 [2024-12-15 02:14:56.268617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.271396] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.271505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:31.636 [2024-12-15 02:14:56.271520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.752 ms 00:19:31.636 [2024-12-15 02:14:56.271527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.277456] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.277531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:31.636 [2024-12-15 02:14:56.277575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.887 ms 00:19:31.636 [2024-12-15 02:14:56.277608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.296242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.296333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:31.636 [2024-12-15 02:14:56.296379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.548 ms 00:19:31.636 [2024-12-15 02:14:56.296396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.308772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.308864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:31.636 [2024-12-15 02:14:56.308907] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 12.323 ms 00:19:31.636 [2024-12-15 02:14:56.308928] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.309112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.309164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:31.636 [2024-12-15 02:14:56.309186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.109 ms 00:19:31.636 [2024-12-15 02:14:56.309221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.327296] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.327383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:31.636 [2024-12-15 02:14:56.327423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.970 ms 00:19:31.636 [2024-12-15 02:14:56.327440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.344875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.344968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:31.636 [2024-12-15 02:14:56.345009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.363 ms 00:19:31.636 [2024-12-15 02:14:56.345027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.362134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.362230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:31.636 [2024-12-15 02:14:56.362272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.036 ms 00:19:31.636 [2024-12-15 02:14:56.362289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.379423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.636 [2024-12-15 02:14:56.379510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:31.636 [2024-12-15 02:14:56.379551] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.028 ms 00:19:31.636 [2024-12-15 02:14:56.379568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.636 [2024-12-15 02:14:56.379619] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:31.636 [2024-12-15 02:14:56.379643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379668] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379935] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.379981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380062] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380697] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380719] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380838] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 
[2024-12-15 02:14:56.380861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:31.636 [2024-12-15 02:14:56.380933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.380981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 
state: free 00:19:31.637 [2024-12-15 02:14:56.381616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381640] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.381990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382037] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382162] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 
0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382541] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382694] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.382980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.383002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.383046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:31.637 [2024-12-15 02:14:56.383076] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:31.637 [2024-12-15 02:14:56.383096] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:19:31.637 [2024-12-15 02:14:56.383157] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:31.637 [2024-12-15 02:14:56.383214] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:31.637 [2024-12-15 02:14:56.383232] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:31.637 [2024-12-15 02:14:56.383251] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:31.637 [2024-12-15 02:14:56.383266] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:31.637 [2024-12-15 02:14:56.383283] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 
00:19:31.637 [2024-12-15 02:14:56.383298] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:31.637 [2024-12-15 02:14:56.383349] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:31.637 [2024-12-15 02:14:56.383405] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:31.637 [2024-12-15 02:14:56.383442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.637 [2024-12-15 02:14:56.383460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:31.637 [2024-12-15 02:14:56.383477] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.824 ms 00:19:31.637 [2024-12-15 02:14:56.383492] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.637 [2024-12-15 02:14:56.393570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.637 [2024-12-15 02:14:56.393654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:31.637 [2024-12-15 02:14:56.393697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.026 ms 00:19:31.637 [2024-12-15 02:14:56.393715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.637 [2024-12-15 02:14:56.394042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:31.637 [2024-12-15 02:14:56.394108] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:31.637 [2024-12-15 02:14:56.394121] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.265 ms 00:19:31.637 [2024-12-15 02:14:56.394128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.430765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.430793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:31.896 [2024-12-15 02:14:56.430804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.430810] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.430899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.430907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:31.896 [2024-12-15 02:14:56.430916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.430922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.430975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.430983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:31.896 [2024-12-15 02:14:56.430996] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.431001] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.431033] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.431040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:31.896 [2024-12-15 02:14:56.431047] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.431053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.497732] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.497770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:31.896 [2024-12-15 02:14:56.497782] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.497788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.549032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.549167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:31.896 [2024-12-15 02:14:56.549184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.549192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.549314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.549323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:31.896 [2024-12-15 02:14:56.549333] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.549341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.549384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.549391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:31.896 [2024-12-15 02:14:56.549399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.549405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.549503] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.896 [2024-12-15 02:14:56.549511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:31.896 [2024-12-15 02:14:56.549519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.896 [2024-12-15 02:14:56.549527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.896 [2024-12-15 02:14:56.549575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.897 [2024-12-15 02:14:56.549583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:31.897 [2024-12-15 02:14:56.549590] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.897 [2024-12-15 02:14:56.549596] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.897 [2024-12-15 02:14:56.549648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.897 [2024-12-15 02:14:56.549656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:31.897 [2024-12-15 02:14:56.549665] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.897 [2024-12-15 02:14:56.549671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:31.897 [2024-12-15 02:14:56.549723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:31.897 [2024-12-15 02:14:56.549730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:31.897 [2024-12-15 02:14:56.549738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:31.897 [2024-12-15 02:14:56.549744] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:19:31.897 [2024-12-15 02:14:56.549913] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 284.561 ms, result 0 00:19:31.897 true 00:19:31.897 02:14:56 ftl.ftl_trim -- ftl/trim.sh@63 -- # killprocess 78159 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78159 ']' 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78159 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78159 00:19:31.897 killing process with pid 78159 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78159' 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78159 00:19:31.897 02:14:56 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78159 00:19:38.456 02:15:02 ftl.ftl_trim -- ftl/trim.sh@66 -- # dd if=/dev/urandom bs=4K count=65536 00:19:39.028 65536+0 records in 00:19:39.028 65536+0 records out 00:19:39.028 268435456 bytes (268 MB, 256 MiB) copied, 1.10001 s, 244 MB/s 00:19:39.028 02:15:03 ftl.ftl_trim -- ftl/trim.sh@69 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:39.288 [2024-12-15 02:15:03.794144] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
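Annotation: the pattern-file step above sizes the copy source at 65536 blocks of 4 KiB, i.e. 268435456 bytes = 256 MiB, which is exactly the 256 MB that spdk_dd reports copying further down at an average of about 19 MBps. A minimal stand-alone sketch of the same step follows; the trace does not show where trim.sh sends dd's output (the log line has no of= and the redirect is not echoed), so the destination below is an assumption, taken from the --if= path that spdk_dd consumes above:

  # Generate the 256 MiB random test pattern consumed by spdk_dd:
  # 65536 blocks x 4096 B = 268435456 B = 256 MiB ("65536+0 records out" above).
  # Output path assumed from the spdk_dd --if= argument in this trace.
  dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern bs=4K count=65536
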
00:19:39.288 [2024-12-15 02:15:03.794517] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78336 ] 00:19:39.288 [2024-12-15 02:15:03.952161] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:39.548 [2024-12-15 02:15:04.059212] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:39.548 [2024-12-15 02:15:04.290823] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.548 [2024-12-15 02:15:04.291052] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:39.810 [2024-12-15 02:15:04.452181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.810 [2024-12-15 02:15:04.452244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:39.810 [2024-12-15 02:15:04.452258] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:39.810 [2024-12-15 02:15:04.452266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.810 [2024-12-15 02:15:04.454956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.810 [2024-12-15 02:15:04.454994] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:39.810 [2024-12-15 02:15:04.455004] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.672 ms 00:19:39.810 [2024-12-15 02:15:04.455011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.810 [2024-12-15 02:15:04.455086] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:39.810 [2024-12-15 02:15:04.456039] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:39.810 [2024-12-15 02:15:04.456085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.810 [2024-12-15 02:15:04.456096] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:39.810 [2024-12-15 02:15:04.456105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.006 ms 00:19:39.810 [2024-12-15 02:15:04.456113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.810 [2024-12-15 02:15:04.457375] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:39.810 [2024-12-15 02:15:04.470178] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.810 [2024-12-15 02:15:04.470224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:39.810 [2024-12-15 02:15:04.470235] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.803 ms 00:19:39.811 [2024-12-15 02:15:04.470243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.470334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.470346] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:39.811 [2024-12-15 02:15:04.470354] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:19:39.811 [2024-12-15 02:15:04.470361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.475509] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:39.811 [2024-12-15 02:15:04.475653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:39.811 [2024-12-15 02:15:04.475669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.106 ms 00:19:39.811 [2024-12-15 02:15:04.475676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.475770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.475780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:39.811 [2024-12-15 02:15:04.475788] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:19:39.811 [2024-12-15 02:15:04.475795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.475823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.475831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:39.811 [2024-12-15 02:15:04.475839] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:19:39.811 [2024-12-15 02:15:04.475846] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.475867] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:39.811 [2024-12-15 02:15:04.479246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.479273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:39.811 [2024-12-15 02:15:04.479282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.385 ms 00:19:39.811 [2024-12-15 02:15:04.479289] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.479327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.479335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:39.811 [2024-12-15 02:15:04.479343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:19:39.811 [2024-12-15 02:15:04.479350] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.479370] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:39.811 [2024-12-15 02:15:04.479389] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:39.811 [2024-12-15 02:15:04.479422] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:39.811 [2024-12-15 02:15:04.479437] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:39.811 [2024-12-15 02:15:04.479539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:39.811 [2024-12-15 02:15:04.479550] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:39.811 [2024-12-15 02:15:04.479560] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:39.811 [2024-12-15 02:15:04.479573] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:39.811 [2024-12-15 02:15:04.479581] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:39.811 [2024-12-15 02:15:04.479589] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:39.811 [2024-12-15 02:15:04.479596] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:39.811 [2024-12-15 02:15:04.479603] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:39.811 [2024-12-15 02:15:04.479610] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:39.811 [2024-12-15 02:15:04.479618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.479625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:39.811 [2024-12-15 02:15:04.479633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:19:39.811 [2024-12-15 02:15:04.479639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.479726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.811 [2024-12-15 02:15:04.479736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:39.811 [2024-12-15 02:15:04.479744] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:39.811 [2024-12-15 02:15:04.479750] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.811 [2024-12-15 02:15:04.479861] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:39.811 [2024-12-15 02:15:04.479872] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:39.811 [2024-12-15 02:15:04.479879] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.811 [2024-12-15 02:15:04.479887] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.479895] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:39.811 [2024-12-15 02:15:04.479901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.479908] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:39.811 [2024-12-15 02:15:04.479916] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:39.811 [2024-12-15 02:15:04.479923] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:39.811 [2024-12-15 02:15:04.479929] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.811 [2024-12-15 02:15:04.479936] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:39.811 [2024-12-15 02:15:04.479948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:39.811 [2024-12-15 02:15:04.479955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:39.811 [2024-12-15 02:15:04.479962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:39.811 [2024-12-15 02:15:04.479968] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:39.811 [2024-12-15 02:15:04.479977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.479984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:39.811 [2024-12-15 02:15:04.479990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:39.811 [2024-12-15 02:15:04.479996] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480003] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:39.811 [2024-12-15 02:15:04.480010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480016] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480023] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:39.811 [2024-12-15 02:15:04.480029] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480042] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:39.811 [2024-12-15 02:15:04.480049] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:39.811 [2024-12-15 02:15:04.480069] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:39.811 [2024-12-15 02:15:04.480088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.811 [2024-12-15 02:15:04.480101] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:39.811 [2024-12-15 02:15:04.480107] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:39.811 [2024-12-15 02:15:04.480114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:39.811 [2024-12-15 02:15:04.480120] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:39.811 [2024-12-15 02:15:04.480127] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:39.811 [2024-12-15 02:15:04.480133] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480140] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:39.811 [2024-12-15 02:15:04.480146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:39.811 [2024-12-15 02:15:04.480153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480160] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:39.811 [2024-12-15 02:15:04.480167] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:39.811 [2024-12-15 02:15:04.480176] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480183] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:39.811 [2024-12-15 02:15:04.480191] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:39.811 [2024-12-15 02:15:04.480217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:39.811 [2024-12-15 02:15:04.480224] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:39.811 
[2024-12-15 02:15:04.480231] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:39.811 [2024-12-15 02:15:04.480238] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:39.811 [2024-12-15 02:15:04.480245] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:39.811 [2024-12-15 02:15:04.480253] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:39.811 [2024-12-15 02:15:04.480262] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.811 [2024-12-15 02:15:04.480271] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:39.811 [2024-12-15 02:15:04.480279] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:39.811 [2024-12-15 02:15:04.480286] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:39.812 [2024-12-15 02:15:04.480294] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:39.812 [2024-12-15 02:15:04.480301] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:39.812 [2024-12-15 02:15:04.480308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:39.812 [2024-12-15 02:15:04.480315] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:39.812 [2024-12-15 02:15:04.480323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:39.812 [2024-12-15 02:15:04.480330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:39.812 [2024-12-15 02:15:04.480337] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480344] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480365] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:39.812 [2024-12-15 02:15:04.480372] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:39.812 [2024-12-15 02:15:04.480380] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480389] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:19:39.812 [2024-12-15 02:15:04.480396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:39.812 [2024-12-15 02:15:04.480403] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:39.812 [2024-12-15 02:15:04.480411] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:39.812 [2024-12-15 02:15:04.480418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.480427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:39.812 [2024-12-15 02:15:04.480434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.625 ms 00:19:39.812 [2024-12-15 02:15:04.480441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.506987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.507125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:39.812 [2024-12-15 02:15:04.507141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.481 ms 00:19:39.812 [2024-12-15 02:15:04.507148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.507289] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.507300] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:39.812 [2024-12-15 02:15:04.507308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:39.812 [2024-12-15 02:15:04.507315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.558308] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.558361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:39.812 [2024-12-15 02:15:04.558376] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.972 ms 00:19:39.812 [2024-12-15 02:15:04.558384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.558477] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.558489] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:39.812 [2024-12-15 02:15:04.558497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:39.812 [2024-12-15 02:15:04.558505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.558867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.558883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:39.812 [2024-12-15 02:15:04.558892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.341 ms 00:19:39.812 [2024-12-15 02:15:04.558903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:39.812 [2024-12-15 02:15:04.559034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:39.812 [2024-12-15 02:15:04.559043] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:39.812 [2024-12-15 02:15:04.559052] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.106 ms 00:19:39.812 [2024-12-15 02:15:04.559059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.073 [2024-12-15 02:15:04.573174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.073 [2024-12-15 02:15:04.573350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:40.073 [2024-12-15 02:15:04.573368] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.096 ms 00:19:40.073 [2024-12-15 02:15:04.573376] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.073 [2024-12-15 02:15:04.586535] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:19:40.074 [2024-12-15 02:15:04.586574] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:40.074 [2024-12-15 02:15:04.586586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.586594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:40.074 [2024-12-15 02:15:04.586604] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.103 ms 00:19:40.074 [2024-12-15 02:15:04.586611] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.611471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.611513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:40.074 [2024-12-15 02:15:04.611524] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.783 ms 00:19:40.074 [2024-12-15 02:15:04.611533] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.624002] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.624040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:40.074 [2024-12-15 02:15:04.624052] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.383 ms 00:19:40.074 [2024-12-15 02:15:04.624059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.636865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.636905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:40.074 [2024-12-15 02:15:04.636917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.725 ms 00:19:40.074 [2024-12-15 02:15:04.636925] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.637633] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.637660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:40.074 [2024-12-15 02:15:04.637671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.591 ms 00:19:40.074 [2024-12-15 02:15:04.637679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.702336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.702400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:40.074 [2024-12-15 02:15:04.702416] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 64.629 ms 00:19:40.074 [2024-12-15 02:15:04.702425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.713833] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:40.074 [2024-12-15 02:15:04.733548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.733604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:40.074 [2024-12-15 02:15:04.733618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.016 ms 00:19:40.074 [2024-12-15 02:15:04.733627] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.733730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.733742] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:40.074 [2024-12-15 02:15:04.733752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:19:40.074 [2024-12-15 02:15:04.733760] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.733820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.733830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:40.074 [2024-12-15 02:15:04.733838] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:40.074 [2024-12-15 02:15:04.733847] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.733883] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.733895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:40.074 [2024-12-15 02:15:04.733903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:40.074 [2024-12-15 02:15:04.733911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.733950] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:40.074 [2024-12-15 02:15:04.733962] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.733971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:40.074 [2024-12-15 02:15:04.733980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:40.074 [2024-12-15 02:15:04.733987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.760905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.760957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:40.074 [2024-12-15 02:15:04.760971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.894 ms 00:19:40.074 [2024-12-15 02:15:04.760979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:40.074 [2024-12-15 02:15:04.761124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:40.074 [2024-12-15 02:15:04.761137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:40.074 [2024-12-15 02:15:04.761147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:19:40.074 [2024-12-15 02:15:04.761155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
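Annotation: the layout dump in this second startup reports "L2P entries: 23592960" with "L2P address size: 4", and the NV-cache layout lists "Region l2p ... blocks: 90.00 MiB"; the bdev_get_bdevs output earlier shows the same device as num_blocks 23592960 with block_size 4096. These figures are mutually consistent, as this quick shell-arithmetic check shows (a sketch; the comments are mine, the numbers are from the log):

  # L2P table size = entries x address size: 23592960 x 4 B = 94371840 B = 90 MiB,
  # matching the "Region l2p ... blocks: 90.00 MiB" line in the layout dump.
  echo $(( 23592960 * 4 / 1024 / 1024 ))            # prints 90
  # Exposed capacity = entries x block size: 23592960 x 4096 B = 90 GiB,
  # matching num_blocks/block_size in the bdev_get_bdevs output.
  echo $(( 23592960 * 4096 / 1024 / 1024 / 1024 ))  # prints 90
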
00:19:40.074 [2024-12-15 02:15:04.762301] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:40.074 [2024-12-15 02:15:04.765667] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 309.753 ms, result 0 00:19:40.074 [2024-12-15 02:15:04.767037] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:40.074 [2024-12-15 02:15:04.780828] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:41.526  [2024-12-15T02:15:06.861Z] Copying: 16/256 [MB] (16 MBps) [2024-12-15T02:15:07.799Z] Copying: 43/256 [MB] (27 MBps) [2024-12-15T02:15:09.181Z] Copying: 78/256 [MB] (35 MBps) [2024-12-15T02:15:10.122Z] Copying: 96/256 [MB] (17 MBps) [2024-12-15T02:15:11.066Z] Copying: 114/256 [MB] (18 MBps) [2024-12-15T02:15:12.006Z] Copying: 128/256 [MB] (14 MBps) [2024-12-15T02:15:12.950Z] Copying: 148/256 [MB] (19 MBps) [2024-12-15T02:15:13.892Z] Copying: 162/256 [MB] (14 MBps) [2024-12-15T02:15:14.836Z] Copying: 182/256 [MB] (19 MBps) [2024-12-15T02:15:16.223Z] Copying: 203/256 [MB] (20 MBps) [2024-12-15T02:15:16.795Z] Copying: 220/256 [MB] (16 MBps) [2024-12-15T02:15:17.738Z] Copying: 240/256 [MB] (20 MBps) [2024-12-15T02:15:17.738Z] Copying: 256/256 [MB] (average 19 MBps)[2024-12-15 02:15:17.642880] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:52.973 [2024-12-15 02:15:17.653909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.653968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:52.973 [2024-12-15 02:15:17.653986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:52.973 [2024-12-15 02:15:17.653996] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.654031] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:52.973 [2024-12-15 02:15:17.657553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.657600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:52.973 [2024-12-15 02:15:17.657612] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.506 ms 00:19:52.973 [2024-12-15 02:15:17.657622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.660600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.660833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:52.973 [2024-12-15 02:15:17.660855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.946 ms 00:19:52.973 [2024-12-15 02:15:17.660864] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.669268] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.669329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:52.973 [2024-12-15 02:15:17.669341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.376 ms 00:19:52.973 [2024-12-15 02:15:17.669351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.676745] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.676790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:52.973 [2024-12-15 02:15:17.676803] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.345 ms 00:19:52.973 [2024-12-15 02:15:17.676812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.703067] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.703117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:52.973 [2024-12-15 02:15:17.703130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.196 ms 00:19:52.973 [2024-12-15 02:15:17.703139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.721165] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.721261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:52.973 [2024-12-15 02:15:17.721281] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.931 ms 00:19:52.973 [2024-12-15 02:15:17.721290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:52.973 [2024-12-15 02:15:17.721461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:52.973 [2024-12-15 02:15:17.721476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:52.973 [2024-12-15 02:15:17.721487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.097 ms 00:19:52.973 [2024-12-15 02:15:17.721508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.235 [2024-12-15 02:15:17.749070] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.235 [2024-12-15 02:15:17.749119] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:53.235 [2024-12-15 02:15:17.749132] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.545 ms 00:19:53.235 [2024-12-15 02:15:17.749140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.235 [2024-12-15 02:15:17.775620] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.235 [2024-12-15 02:15:17.775669] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:53.235 [2024-12-15 02:15:17.775682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.371 ms 00:19:53.235 [2024-12-15 02:15:17.775689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.235 [2024-12-15 02:15:17.801144] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.235 [2024-12-15 02:15:17.801191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:53.235 [2024-12-15 02:15:17.801256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.384 ms 00:19:53.235 [2024-12-15 02:15:17.801264] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.235 [2024-12-15 02:15:17.826680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.235 [2024-12-15 02:15:17.826886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:53.235 [2024-12-15 02:15:17.826910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.314 ms 00:19:53.235 [2024-12-15 02:15:17.826918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:53.235 [2024-12-15 02:15:17.826961] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:53.235 [2024-12-15 02:15:17.826978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.826990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.826999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.827008] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.827015] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.827023] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.827030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:53.235 [2024-12-15 02:15:17.827038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827045] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827054] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827061] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 
state: free 00:19:53.236 [2024-12-15 02:15:17.827170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827232] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827242] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827330] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 
0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827513] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827655] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827669] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827755] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:53.236 [2024-12-15 02:15:17.827804] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:53.237 [2024-12-15 02:15:17.827812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:53.237 [2024-12-15 02:15:17.827828] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:53.237 [2024-12-15 02:15:17.827836] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:19:53.237 [2024-12-15 02:15:17.827846] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:53.237 [2024-12-15 02:15:17.827855] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:53.237 [2024-12-15 02:15:17.827862] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:53.237 [2024-12-15 02:15:17.827871] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:53.237 [2024-12-15 02:15:17.827878] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:53.237 [2024-12-15 02:15:17.827886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:53.237 [2024-12-15 02:15:17.827893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:53.237 [2024-12-15 02:15:17.827900] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:53.237 [2024-12-15 02:15:17.827907] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:53.237 [2024-12-15 02:15:17.827915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.237 [2024-12-15 02:15:17.827926] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:53.237 [2024-12-15 02:15:17.827935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:19:53.237 [2024-12-15 02:15:17.827942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.842992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.237 [2024-12-15 02:15:17.843038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:53.237 [2024-12-15 02:15:17.843053] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.012 ms 00:19:53.237 [2024-12-15 02:15:17.843061] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.843561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:53.237 [2024-12-15 02:15:17.843584] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:53.237 [2024-12-15 02:15:17.843594] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.457 ms 00:19:53.237 [2024-12-15 02:15:17.843602] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.886043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-12-15 02:15:17.886289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:53.237 [2024-12-15 02:15:17.886315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-12-15 02:15:17.886327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.886458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-12-15 02:15:17.886471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:53.237 
[2024-12-15 02:15:17.886480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-12-15 02:15:17.886489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.886549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-12-15 02:15:17.886563] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:53.237 [2024-12-15 02:15:17.886571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-12-15 02:15:17.886579] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.886598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-12-15 02:15:17.886611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:53.237 [2024-12-15 02:15:17.886620] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-12-15 02:15:17.886628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.237 [2024-12-15 02:15:17.960387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.237 [2024-12-15 02:15:17.960431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:53.237 [2024-12-15 02:15:17.960442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.237 [2024-12-15 02:15:17.960450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.013356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.013685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:53.498 [2024-12-15 02:15:18.013700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.013707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.013774] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.013782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:53.498 [2024-12-15 02:15:18.013789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.013795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.013821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.013828] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:53.498 [2024-12-15 02:15:18.013837] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.013845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.013924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.013933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:53.498 [2024-12-15 02:15:18.013940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.013946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.013974] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.013982] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:53.498 [2024-12-15 02:15:18.013989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.013997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.014032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.498 [2024-12-15 02:15:18.014041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:53.498 [2024-12-15 02:15:18.014048] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.498 [2024-12-15 02:15:18.014054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.498 [2024-12-15 02:15:18.014094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:53.499 [2024-12-15 02:15:18.014102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:53.499 [2024-12-15 02:15:18.014112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:53.499 [2024-12-15 02:15:18.014118] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:53.499 [2024-12-15 02:15:18.014269] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 360.374 ms, result 0 00:19:54.071 00:19:54.071 00:19:54.071 02:15:18 ftl.ftl_trim -- ftl/trim.sh@72 -- # svcpid=78495 00:19:54.071 02:15:18 ftl.ftl_trim -- ftl/trim.sh@73 -- # waitforlisten 78495 00:19:54.071 02:15:18 ftl.ftl_trim -- ftl/trim.sh@71 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78495 ']' 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:19:54.071 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:19:54.071 02:15:18 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:19:54.071 [2024-12-15 02:15:18.780483] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
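Before the `[ DPDK EAL parameters: ... ]` line that follows, the harness blocks in `waitforlisten` until the freshly started `spdk_tgt` accepts connections on `/var/tmp/spdk.sock`. A minimal sketch of that start / poll / RPC pattern, assuming a built SPDK tree — the polling loop and raw-socket client below are illustrative, not SPDK's actual `waitforlisten` implementation:

```python
#!/usr/bin/env python3
# Illustrative sketch of the start/wait/RPC pattern in the trim.sh trace
# above -- not SPDK's waitforlisten. Paths mirror the log; adjust as needed.
import json
import socket
import subprocess
import time

SOCK = "/var/tmp/spdk.sock"
tgt = subprocess.Popen(["./build/bin/spdk_tgt", "-L", "ftl_init"])

# "waitforlisten": poll until the UNIX-domain RPC socket accepts a connection.
deadline = time.time() + 100
while True:
    sock = socket.socket(socket.AF_UNIX, socket.SOCK_STREAM)
    try:
        sock.connect(SOCK)
        break
    except OSError:
        sock.close()
        if time.time() > deadline:
            tgt.kill()
            raise SystemExit(f"spdk_tgt never listened on {SOCK}")
        time.sleep(0.1)

# One JSON-RPC request over the raw socket; scripts/rpc.py wraps this same
# exchange. bdev_get_bdevs is a standard SPDK method.
sock.sendall(json.dumps({"jsonrpc": "2.0", "id": 1,
                         "method": "bdev_get_bdevs"}).encode())
# A real client would read until a complete JSON document arrives; one large
# recv() is enough for a sketch.
print(sock.recv(1 << 20).decode())
tgt.terminate()
```

Once the socket is up, the same transport carries every call the test makes later, such as the `bdev_ftl_unmap` invocations further down in this log.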
00:19:54.071 [2024-12-15 02:15:18.780598] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78495 ] 00:19:54.331 [2024-12-15 02:15:18.935149] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:54.331 [2024-12-15 02:15:19.022500] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:54.903 02:15:19 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:19:54.903 02:15:19 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:19:54.903 02:15:19 ftl.ftl_trim -- ftl/trim.sh@75 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:19:55.164 [2024-12-15 02:15:19.803226] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.164 [2024-12-15 02:15:19.803286] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:55.425 [2024-12-15 02:15:19.976044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.425 [2024-12-15 02:15:19.976088] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:55.425 [2024-12-15 02:15:19.976101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.425 [2024-12-15 02:15:19.976107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.425 [2024-12-15 02:15:19.982538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.425 [2024-12-15 02:15:19.982945] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:55.425 [2024-12-15 02:15:19.983006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.403 ms 00:19:55.425 [2024-12-15 02:15:19.983033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.425 [2024-12-15 02:15:19.983469] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:55.426 [2024-12-15 02:15:19.985797] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:55.426 [2024-12-15 02:15:19.985874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:19.985901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:55.426 [2024-12-15 02:15:19.985931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.441 ms 00:19:55.426 [2024-12-15 02:15:19.985953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:19.988590] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:55.426 [2024-12-15 02:15:20.004760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.004794] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:55.426 [2024-12-15 02:15:20.004805] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.184 ms 00:19:55.426 [2024-12-15 02:15:20.004815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.004965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.004980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:55.426 [2024-12-15 02:15:20.004989] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:19:55.426 [2024-12-15 02:15:20.004998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.011707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.011744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:55.426 [2024-12-15 02:15:20.011755] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.658 ms 00:19:55.426 [2024-12-15 02:15:20.011765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.011860] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.011873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:55.426 [2024-12-15 02:15:20.011882] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:19:55.426 [2024-12-15 02:15:20.011895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.011918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.011927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:55.426 [2024-12-15 02:15:20.011935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:19:55.426 [2024-12-15 02:15:20.011944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.011966] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:55.426 [2024-12-15 02:15:20.015662] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.015797] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:55.426 [2024-12-15 02:15:20.015815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.699 ms 00:19:55.426 [2024-12-15 02:15:20.015823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.015891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.015901] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:55.426 [2024-12-15 02:15:20.015912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:55.426 [2024-12-15 02:15:20.015921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.015943] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:55.426 [2024-12-15 02:15:20.015965] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:55.426 [2024-12-15 02:15:20.016013] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:55.426 [2024-12-15 02:15:20.016029] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:55.426 [2024-12-15 02:15:20.016137] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:55.426 [2024-12-15 02:15:20.016148] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:55.426 [2024-12-15 02:15:20.016165] upgrade/ftl_sb_v5.c: 
109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:55.426 [2024-12-15 02:15:20.016175] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016186] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016210] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:55.426 [2024-12-15 02:15:20.016221] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:55.426 [2024-12-15 02:15:20.016229] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:55.426 [2024-12-15 02:15:20.016243] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:55.426 [2024-12-15 02:15:20.016251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.016260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:55.426 [2024-12-15 02:15:20.016268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.313 ms 00:19:55.426 [2024-12-15 02:15:20.016277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.016366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.426 [2024-12-15 02:15:20.016377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:55.426 [2024-12-15 02:15:20.016384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:55.426 [2024-12-15 02:15:20.016393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.426 [2024-12-15 02:15:20.016493] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:55.426 [2024-12-15 02:15:20.016505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:55.426 [2024-12-15 02:15:20.016514] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:55.426 [2024-12-15 02:15:20.016542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016549] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016559] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:55.426 [2024-12-15 02:15:20.016567] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.426 [2024-12-15 02:15:20.016583] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:55.426 [2024-12-15 02:15:20.016592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:55.426 [2024-12-15 02:15:20.016599] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:55.426 [2024-12-15 02:15:20.016607] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:55.426 [2024-12-15 02:15:20.016613] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:55.426 [2024-12-15 02:15:20.016621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 
[2024-12-15 02:15:20.016629] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:55.426 [2024-12-15 02:15:20.016637] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016650] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016664] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:55.426 [2024-12-15 02:15:20.016671] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016680] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016686] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:55.426 [2024-12-15 02:15:20.016696] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016702] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016711] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:55.426 [2024-12-15 02:15:20.016718] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016726] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016732] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:55.426 [2024-12-15 02:15:20.016742] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016748] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016756] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:55.426 [2024-12-15 02:15:20.016763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016771] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.426 [2024-12-15 02:15:20.016777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:55.426 [2024-12-15 02:15:20.016786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:55.426 [2024-12-15 02:15:20.016792] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:55.426 [2024-12-15 02:15:20.016800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:55.426 [2024-12-15 02:15:20.016807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:55.426 [2024-12-15 02:15:20.016817] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016824] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:55.426 [2024-12-15 02:15:20.016832] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:55.426 [2024-12-15 02:15:20.016838] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016847] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:55.426 [2024-12-15 02:15:20.016856] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:55.426 [2024-12-15 02:15:20.016865] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:55.426 [2024-12-15 02:15:20.016873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:55.426 [2024-12-15 02:15:20.016882] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region vmap 00:19:55.426 [2024-12-15 02:15:20.016888] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:55.426 [2024-12-15 02:15:20.016896] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:55.427 [2024-12-15 02:15:20.016903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:55.427 [2024-12-15 02:15:20.016912] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:55.427 [2024-12-15 02:15:20.016920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:55.427 [2024-12-15 02:15:20.016930] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:55.427 [2024-12-15 02:15:20.016939] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.016952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:55.427 [2024-12-15 02:15:20.016960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:55.427 [2024-12-15 02:15:20.016969] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:55.427 [2024-12-15 02:15:20.016976] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:55.427 [2024-12-15 02:15:20.016985] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:55.427 [2024-12-15 02:15:20.016991] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:55.427 [2024-12-15 02:15:20.017000] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:55.427 [2024-12-15 02:15:20.017007] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:55.427 [2024-12-15 02:15:20.017016] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:55.427 [2024-12-15 02:15:20.017023] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017039] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017047] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017055] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:55.427 [2024-12-15 02:15:20.017064] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:55.427 [2024-12-15 
02:15:20.017073] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017085] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:55.427 [2024-12-15 02:15:20.017093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:55.427 [2024-12-15 02:15:20.017102] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:55.427 [2024-12-15 02:15:20.017109] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:55.427 [2024-12-15 02:15:20.017118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.017126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:55.427 [2024-12-15 02:15:20.017135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:19:55.427 [2024-12-15 02:15:20.017144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.047018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.047153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:55.427 [2024-12-15 02:15:20.047236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 29.803 ms 00:19:55.427 [2024-12-15 02:15:20.047263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.047412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.047867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:55.427 [2024-12-15 02:15:20.047919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:55.427 [2024-12-15 02:15:20.047942] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.080924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.081038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:55.427 [2024-12-15 02:15:20.081090] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.930 ms 00:19:55.427 [2024-12-15 02:15:20.081114] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.081241] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.081272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:55.427 [2024-12-15 02:15:20.081295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:55.427 [2024-12-15 02:15:20.081315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.081733] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.081770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:55.427 [2024-12-15 02:15:20.081795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.384 ms 00:19:55.427 [2024-12-15 02:15:20.081814] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.081965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.082010] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:55.427 [2024-12-15 02:15:20.082033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:19:55.427 [2024-12-15 02:15:20.082051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.097979] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.098081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:55.427 [2024-12-15 02:15:20.098131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.854 ms 00:19:55.427 [2024-12-15 02:15:20.098153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.130274] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:55.427 [2024-12-15 02:15:20.130425] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:55.427 [2024-12-15 02:15:20.130494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.130517] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:55.427 [2024-12-15 02:15:20.130541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.201 ms 00:19:55.427 [2024-12-15 02:15:20.130567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.155263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.156793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:55.427 [2024-12-15 02:15:20.156860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.609 ms 00:19:55.427 [2024-12-15 02:15:20.156883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.168867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.168973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:55.427 [2024-12-15 02:15:20.169027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.821 ms 00:19:55.427 [2024-12-15 02:15:20.169049] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.180568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.180672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:55.427 [2024-12-15 02:15:20.180723] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.445 ms 00:19:55.427 [2024-12-15 02:15:20.180745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.427 [2024-12-15 02:15:20.181688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.427 [2024-12-15 02:15:20.181810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:55.427 [2024-12-15 02:15:20.181833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:19:55.427 [2024-12-15 02:15:20.181842] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 
02:15:20.242388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.242428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:55.688 [2024-12-15 02:15:20.242444] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 60.498 ms 00:19:55.688 [2024-12-15 02:15:20.242452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.253428] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:55.688 [2024-12-15 02:15:20.270523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.270565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:55.688 [2024-12-15 02:15:20.270580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.999 ms 00:19:55.688 [2024-12-15 02:15:20.270590] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.270669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.270682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:55.688 [2024-12-15 02:15:20.270691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:55.688 [2024-12-15 02:15:20.270700] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.270754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.270766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:55.688 [2024-12-15 02:15:20.270774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:19:55.688 [2024-12-15 02:15:20.270786] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.270810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.270821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:55.688 [2024-12-15 02:15:20.270828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:55.688 [2024-12-15 02:15:20.270840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.270875] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:55.688 [2024-12-15 02:15:20.270890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.270900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:55.688 [2024-12-15 02:15:20.270910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:55.688 [2024-12-15 02:15:20.270917] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.295291] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.295325] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:55.688 [2024-12-15 02:15:20.295340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.344 ms 00:19:55.688 [2024-12-15 02:15:20.295348] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.295440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.688 [2024-12-15 02:15:20.295451] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:19:55.688 [2024-12-15 02:15:20.295462] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:19:55.688 [2024-12-15 02:15:20.295472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.688 [2024-12-15 02:15:20.296426] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:55.688 [2024-12-15 02:15:20.299582] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 320.049 ms, result 0 00:19:55.688 [2024-12-15 02:15:20.301530] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:55.688 Some configs were skipped because the RPC state that can call them passed over. 00:19:55.688 02:15:20 ftl.ftl_trim -- ftl/trim.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:19:55.949 [2024-12-15 02:15:20.537785] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:55.949 [2024-12-15 02:15:20.537984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:55.949 [2024-12-15 02:15:20.538058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.174 ms 00:19:55.949 [2024-12-15 02:15:20.538085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:55.949 [2024-12-15 02:15:20.538143] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.534 ms, result 0 00:19:55.949 true 00:19:55.949 02:15:20 ftl.ftl_trim -- ftl/trim.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:19:56.210 [2024-12-15 02:15:20.753637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:56.210 [2024-12-15 02:15:20.753816] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:19:56.210 [2024-12-15 02:15:20.753886] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.726 ms 00:19:56.210 [2024-12-15 02:15:20.753911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:56.210 [2024-12-15 02:15:20.753971] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 3.060 ms, result 0 00:19:56.210 true 00:19:56.210 02:15:20 ftl.ftl_trim -- ftl/trim.sh@81 -- # killprocess 78495 00:19:56.210 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78495 ']' 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78495 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78495 00:19:56.211 killing process with pid 78495 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78495' 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78495 00:19:56.211 02:15:20 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78495 00:19:57.162 [2024-12-15 02:15:21.594552] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.594604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:19:57.162 [2024-12-15 02:15:21.594616] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:19:57.162 [2024-12-15 02:15:21.594624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.594645] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:19:57.162 [2024-12-15 02:15:21.596878] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.596904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:19:57.162 [2024-12-15 02:15:21.596916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.217 ms 00:19:57.162 [2024-12-15 02:15:21.596923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.597155] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.597164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:19:57.162 [2024-12-15 02:15:21.597173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.209 ms 00:19:57.162 [2024-12-15 02:15:21.597179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.600778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.600944] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:19:57.162 [2024-12-15 02:15:21.600963] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.582 ms 00:19:57.162 [2024-12-15 02:15:21.600970] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.606207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.606233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:19:57.162 [2024-12-15 02:15:21.606242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.191 ms 00:19:57.162 [2024-12-15 02:15:21.606248] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.614496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.614538] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:19:57.162 [2024-12-15 02:15:21.614550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.205 ms 00:19:57.162 [2024-12-15 02:15:21.614556] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.622180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.622217] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:19:57.162 [2024-12-15 02:15:21.622227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.591 ms 00:19:57.162 [2024-12-15 02:15:21.622234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.622354] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.622364] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:19:57.162 [2024-12-15 02:15:21.622372] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:19:57.162 [2024-12-15 02:15:21.622378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.631417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.162 [2024-12-15 02:15:21.631524] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:19:57.162 [2024-12-15 02:15:21.631540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.023 ms 00:19:57.162 [2024-12-15 02:15:21.631545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.162 [2024-12-15 02:15:21.639520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.163 [2024-12-15 02:15:21.639542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:19:57.163 [2024-12-15 02:15:21.639553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.946 ms 00:19:57.163 [2024-12-15 02:15:21.639559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.163 [2024-12-15 02:15:21.647293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.163 [2024-12-15 02:15:21.647316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:19:57.163 [2024-12-15 02:15:21.647325] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.701 ms 00:19:57.163 [2024-12-15 02:15:21.647330] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.163 [2024-12-15 02:15:21.654811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.163 [2024-12-15 02:15:21.654908] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:19:57.163 [2024-12-15 02:15:21.654922] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.430 ms 00:19:57.163 [2024-12-15 02:15:21.654927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.163 [2024-12-15 02:15:21.654964] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:19:57.163 [2024-12-15 02:15:21.654975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.654985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.654991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.654999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655014] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655045] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655053] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655096] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655142] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655184] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 
[2024-12-15 02:15:21.655230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655236] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655340] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655346] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655374] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 
state: free 00:19:57.163 [2024-12-15 02:15:21.655400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655407] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655420] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655447] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 
0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655566] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655625] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655632] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:19:57.163 [2024-12-15 02:15:21.655637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:19:57.164 [2024-12-15 02:15:21.655645] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:19:57.164 [2024-12-15 02:15:21.655651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:19:57.164 [2024-12-15 02:15:21.655658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:19:57.164 [2024-12-15 02:15:21.655676] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:19:57.164 [2024-12-15 02:15:21.655687] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:19:57.164 [2024-12-15 02:15:21.655695] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:19:57.164 [2024-12-15 02:15:21.655703] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:19:57.164 [2024-12-15 02:15:21.655708] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:19:57.164 [2024-12-15 02:15:21.655716] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:19:57.164 [2024-12-15 02:15:21.655722] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:19:57.164 [2024-12-15 02:15:21.655730] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:19:57.164 [2024-12-15 02:15:21.655736] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:19:57.164 [2024-12-15 02:15:21.655742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:19:57.164 [2024-12-15 02:15:21.655747] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:19:57.164 [2024-12-15 02:15:21.655753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:19:57.164 [2024-12-15 02:15:21.655759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:19:57.164 [2024-12-15 02:15:21.655767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.791 ms 00:19:57.164 [2024-12-15 02:15:21.655774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.666347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.164 [2024-12-15 02:15:21.666440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:19:57.164 [2024-12-15 02:15:21.666489] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.553 ms 00:19:57.164 [2024-12-15 02:15:21.666507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.666823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:57.164 [2024-12-15 02:15:21.666855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:19:57.164 [2024-12-15 02:15:21.666905] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.268 ms 00:19:57.164 [2024-12-15 02:15:21.666922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.704004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.704098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:57.164 [2024-12-15 02:15:21.704140] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.704157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.704259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.704280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:57.164 [2024-12-15 02:15:21.704301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.704315] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.704363] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.704382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:57.164 [2024-12-15 02:15:21.704402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.704464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.704494] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.704510] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:57.164 [2024-12-15 02:15:21.704528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.704544] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.768467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.768590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:57.164 [2024-12-15 02:15:21.768633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.768651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 
02:15:21.819470] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.819590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:57.164 [2024-12-15 02:15:21.819633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.819654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.819739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.819758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:57.164 [2024-12-15 02:15:21.819778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.819794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.819828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.819845] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:57.164 [2024-12-15 02:15:21.819862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.819921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.820018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.820038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:57.164 [2024-12-15 02:15:21.820054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.820069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.820108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.820126] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:19:57.164 [2024-12-15 02:15:21.820143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.820157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.820213] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.820614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:57.164 [2024-12-15 02:15:21.820688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.820709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.820779] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:19:57.164 [2024-12-15 02:15:21.820907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:57.164 [2024-12-15 02:15:21.820929] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:19:57.164 [2024-12-15 02:15:21.820945] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:57.164 [2024-12-15 02:15:21.821089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 226.508 ms, result 0 00:19:57.735 02:15:22 ftl.ftl_trim -- ftl/trim.sh@84 -- # file=/home/vagrant/spdk_repo/spdk/test/ftl/data 00:19:57.735 02:15:22 ftl.ftl_trim -- ftl/trim.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 
--of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:19:57.735 [2024-12-15 02:15:22.470315] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:19:57.735 [2024-12-15 02:15:22.470564] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78547 ] 00:19:57.995 [2024-12-15 02:15:22.629921] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:19:57.995 [2024-12-15 02:15:22.728604] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:19:58.255 [2024-12-15 02:15:22.993865] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.255 [2024-12-15 02:15:22.994130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:19:58.517 [2024-12-15 02:15:23.156125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.156393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:19:58.517 [2024-12-15 02:15:23.156610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:19:58.517 [2024-12-15 02:15:23.156635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.159673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.159854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:19:58.517 [2024-12-15 02:15:23.159875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.000 ms 00:19:58.517 [2024-12-15 02:15:23.159884] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.160143] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:19:58.517 [2024-12-15 02:15:23.161323] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:19:58.517 [2024-12-15 02:15:23.161381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.161392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:19:58.517 [2024-12-15 02:15:23.161403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.253 ms 00:19:58.517 [2024-12-15 02:15:23.161411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.163245] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:19:58.517 [2024-12-15 02:15:23.177639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.177702] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:19:58.517 [2024-12-15 02:15:23.177716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.396 ms 00:19:58.517 [2024-12-15 02:15:23.177725] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.177854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.177867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:19:58.517 [2024-12-15 02:15:23.177877] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.031 ms 00:19:58.517 [2024-12-15 02:15:23.177885] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.186288] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.186331] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:19:58.517 [2024-12-15 02:15:23.186341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.357 ms 00:19:58.517 [2024-12-15 02:15:23.186349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.186464] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.186476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:19:58.517 [2024-12-15 02:15:23.186485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:19:58.517 [2024-12-15 02:15:23.186493] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.186523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.517 [2024-12-15 02:15:23.186531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:19:58.517 [2024-12-15 02:15:23.186540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:19:58.517 [2024-12-15 02:15:23.186547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.517 [2024-12-15 02:15:23.186569] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:19:58.518 [2024-12-15 02:15:23.190715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.518 [2024-12-15 02:15:23.190756] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:19:58.518 [2024-12-15 02:15:23.190766] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.152 ms 00:19:58.518 [2024-12-15 02:15:23.190774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.518 [2024-12-15 02:15:23.190853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.518 [2024-12-15 02:15:23.190864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:19:58.518 [2024-12-15 02:15:23.190875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:19:58.518 [2024-12-15 02:15:23.190883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.518 [2024-12-15 02:15:23.190909] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:19:58.518 [2024-12-15 02:15:23.190934] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:19:58.518 [2024-12-15 02:15:23.190970] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:19:58.518 [2024-12-15 02:15:23.190987] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:19:58.518 [2024-12-15 02:15:23.191094] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:19:58.518 [2024-12-15 02:15:23.191106] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:19:58.518 [2024-12-15 02:15:23.191117] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: 
*NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:19:58.518 [2024-12-15 02:15:23.191132] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191141] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191150] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:19:58.518 [2024-12-15 02:15:23.191159] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:19:58.518 [2024-12-15 02:15:23.191166] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:19:58.518 [2024-12-15 02:15:23.191174] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:19:58.518 [2024-12-15 02:15:23.191183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.518 [2024-12-15 02:15:23.191190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:19:58.518 [2024-12-15 02:15:23.191222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:19:58.518 [2024-12-15 02:15:23.191230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.518 [2024-12-15 02:15:23.191320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.518 [2024-12-15 02:15:23.191333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:19:58.518 [2024-12-15 02:15:23.191341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:19:58.518 [2024-12-15 02:15:23.191351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.518 [2024-12-15 02:15:23.191452] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:19:58.518 [2024-12-15 02:15:23.191463] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:19:58.518 [2024-12-15 02:15:23.191471] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191488] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:19:58.518 [2024-12-15 02:15:23.191496] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191503] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191510] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:19:58.518 [2024-12-15 02:15:23.191517] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191524] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.518 [2024-12-15 02:15:23.191531] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:19:58.518 [2024-12-15 02:15:23.191546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:19:58.518 [2024-12-15 02:15:23.191553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:19:58.518 [2024-12-15 02:15:23.191561] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:19:58.518 [2024-12-15 02:15:23.191568] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:19:58.518 [2024-12-15 02:15:23.191574] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191582] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:19:58.518 [2024-12-15 02:15:23.191589] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191603] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:19:58.518 [2024-12-15 02:15:23.191610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191623] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:19:58.518 [2024-12-15 02:15:23.191629] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191635] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:19:58.518 [2024-12-15 02:15:23.191649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191656] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191663] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:19:58.518 [2024-12-15 02:15:23.191669] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191676] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191682] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:19:58.518 [2024-12-15 02:15:23.191689] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.518 [2024-12-15 02:15:23.191706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:19:58.518 [2024-12-15 02:15:23.191712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:19:58.518 [2024-12-15 02:15:23.191719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:19:58.518 [2024-12-15 02:15:23.191725] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:19:58.518 [2024-12-15 02:15:23.191732] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:19:58.518 [2024-12-15 02:15:23.191739] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191745] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:19:58.518 [2024-12-15 02:15:23.191752] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:19:58.518 [2024-12-15 02:15:23.191759] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191765] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:19:58.518 [2024-12-15 02:15:23.191773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:19:58.518 [2024-12-15 02:15:23.191784] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191791] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:19:58.518 [2024-12-15 02:15:23.191799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:19:58.518 
[2024-12-15 02:15:23.191807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:19:58.518 [2024-12-15 02:15:23.191813] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:19:58.518 [2024-12-15 02:15:23.191821] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:19:58.518 [2024-12-15 02:15:23.191827] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:19:58.518 [2024-12-15 02:15:23.191834] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:19:58.518 [2024-12-15 02:15:23.191843] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:19:58.518 [2024-12-15 02:15:23.191853] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:19:58.518 [2024-12-15 02:15:23.191869] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:19:58.518 [2024-12-15 02:15:23.191877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:19:58.518 [2024-12-15 02:15:23.191884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:19:58.518 [2024-12-15 02:15:23.191892] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:19:58.518 [2024-12-15 02:15:23.191899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:19:58.518 [2024-12-15 02:15:23.191906] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:19:58.518 [2024-12-15 02:15:23.191914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:19:58.518 [2024-12-15 02:15:23.191921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:19:58.518 [2024-12-15 02:15:23.191928] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191952] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:19:58.518 [2024-12-15 02:15:23.191968] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:19:58.518 [2024-12-15 02:15:23.191976] upgrade/ftl_sb_v5.c: 
430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:19:58.518 [2024-12-15 02:15:23.191984] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:19:58.519 [2024-12-15 02:15:23.191992] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:19:58.519 [2024-12-15 02:15:23.192000] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:19:58.519 [2024-12-15 02:15:23.192008] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:19:58.519 [2024-12-15 02:15:23.192016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.192027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:19:58.519 [2024-12-15 02:15:23.192035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.634 ms 00:19:58.519 [2024-12-15 02:15:23.192043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 02:15:23.224530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.224579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:19:58.519 [2024-12-15 02:15:23.224591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.431 ms 00:19:58.519 [2024-12-15 02:15:23.224599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 02:15:23.224739] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.224751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:19:58.519 [2024-12-15 02:15:23.224760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.064 ms 00:19:58.519 [2024-12-15 02:15:23.224768] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 02:15:23.271924] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.271979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:19:58.519 [2024-12-15 02:15:23.271997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.131 ms 00:19:58.519 [2024-12-15 02:15:23.272006] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 02:15:23.272125] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.272138] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:19:58.519 [2024-12-15 02:15:23.272148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:19:58.519 [2024-12-15 02:15:23.272156] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 02:15:23.272778] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.272817] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:19:58.519 [2024-12-15 02:15:23.272828] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.596 ms 00:19:58.519 [2024-12-15 02:15:23.272844] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.519 [2024-12-15 
02:15:23.273004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.519 [2024-12-15 02:15:23.273021] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:19:58.519 [2024-12-15 02:15:23.273031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:19:58.519 [2024-12-15 02:15:23.273039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.289842] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.289887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:19:58.780 [2024-12-15 02:15:23.289898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.781 ms 00:19:58.780 [2024-12-15 02:15:23.289907] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.304530] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:19:58.780 [2024-12-15 02:15:23.304728] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:19:58.780 [2024-12-15 02:15:23.304749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.304758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:19:58.780 [2024-12-15 02:15:23.304768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.719 ms 00:19:58.780 [2024-12-15 02:15:23.304775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.330994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.331048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:19:58.780 [2024-12-15 02:15:23.331062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:19:58.780 [2024-12-15 02:15:23.331071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.344645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.344694] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:19:58.780 [2024-12-15 02:15:23.344707] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.466 ms 00:19:58.780 [2024-12-15 02:15:23.344715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.357517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.357564] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:19:58.780 [2024-12-15 02:15:23.357576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.707 ms 00:19:58.780 [2024-12-15 02:15:23.357585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.358283] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.358447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:19:58.780 [2024-12-15 02:15:23.358466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.574 ms 00:19:58.780 [2024-12-15 02:15:23.358475] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.425501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.425728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:19:58.780 [2024-12-15 02:15:23.425756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 66.987 ms 00:19:58.780 [2024-12-15 02:15:23.425765] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.780 [2024-12-15 02:15:23.437762] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:19:58.780 [2024-12-15 02:15:23.459257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.780 [2024-12-15 02:15:23.459309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:19:58.780 [2024-12-15 02:15:23.459324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.382 ms 00:19:58.780 [2024-12-15 02:15:23.459341] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.459449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.459461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:19:58.781 [2024-12-15 02:15:23.459470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:19:58.781 [2024-12-15 02:15:23.459479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.459540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.459550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:19:58.781 [2024-12-15 02:15:23.459560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:19:58.781 [2024-12-15 02:15:23.459577] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.459610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.459620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:19:58.781 [2024-12-15 02:15:23.459629] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:19:58.781 [2024-12-15 02:15:23.459637] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.459677] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:19:58.781 [2024-12-15 02:15:23.459689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.459698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:19:58.781 [2024-12-15 02:15:23.459706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:19:58.781 [2024-12-15 02:15:23.459714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.486320] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.486392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:19:58.781 [2024-12-15 02:15:23.486408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.583 ms 00:19:58.781 [2024-12-15 02:15:23.486417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.486555] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:19:58.781 [2024-12-15 02:15:23.486568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize 
initialization 00:19:58.781 [2024-12-15 02:15:23.486579] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:19:58.781 [2024-12-15 02:15:23.486587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:19:58.781 [2024-12-15 02:15:23.487735] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:19:58.781 [2024-12-15 02:15:23.491348] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 331.283 ms, result 0 00:19:58.781 [2024-12-15 02:15:23.492723] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:19:58.781 [2024-12-15 02:15:23.506461] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:00.167  [2024-12-15T02:15:25.873Z] Copying: 14/256 [MB] (14 MBps) [2024-12-15T02:15:26.813Z] Copying: 32/256 [MB] (18 MBps) [2024-12-15T02:15:27.752Z] Copying: 47/256 [MB] (15 MBps) [2024-12-15T02:15:28.691Z] Copying: 63/256 [MB] (15 MBps) [2024-12-15T02:15:29.678Z] Copying: 76/256 [MB] (12 MBps) [2024-12-15T02:15:30.641Z] Copying: 92/256 [MB] (16 MBps) [2024-12-15T02:15:31.580Z] Copying: 110/256 [MB] (17 MBps) [2024-12-15T02:15:32.523Z] Copying: 122/256 [MB] (12 MBps) [2024-12-15T02:15:33.911Z] Copying: 142/256 [MB] (19 MBps) [2024-12-15T02:15:34.857Z] Copying: 156/256 [MB] (14 MBps) [2024-12-15T02:15:35.800Z] Copying: 169/256 [MB] (12 MBps) [2024-12-15T02:15:36.746Z] Copying: 188/256 [MB] (19 MBps) [2024-12-15T02:15:37.689Z] Copying: 201/256 [MB] (12 MBps) [2024-12-15T02:15:38.633Z] Copying: 219/256 [MB] (18 MBps) [2024-12-15T02:15:39.577Z] Copying: 239/256 [MB] (20 MBps) [2024-12-15T02:15:39.840Z] Copying: 253/256 [MB] (14 MBps) [2024-12-15T02:15:39.840Z] Copying: 256/256 [MB] (average 15 MBps)[2024-12-15 02:15:39.598128] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:15.075 [2024-12-15 02:15:39.608529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.608729] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:15.075 [2024-12-15 02:15:39.608761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:15.075 [2024-12-15 02:15:39.608770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.608803] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:15.075 [2024-12-15 02:15:39.611772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.611928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:15.075 [2024-12-15 02:15:39.611948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.953 ms 00:20:15.075 [2024-12-15 02:15:39.611957] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.612246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.612258] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:15.075 [2024-12-15 02:15:39.612268] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.259 ms 00:20:15.075 [2024-12-15 02:15:39.612276] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 
02:15:39.615978] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.615999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:15.075 [2024-12-15 02:15:39.616010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.683 ms 00:20:15.075 [2024-12-15 02:15:39.616018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.623037] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.623206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:15.075 [2024-12-15 02:15:39.623227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.000 ms 00:20:15.075 [2024-12-15 02:15:39.623237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.649019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.649066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:15.075 [2024-12-15 02:15:39.649079] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.718 ms 00:20:15.075 [2024-12-15 02:15:39.649086] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.665571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.665616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:15.075 [2024-12-15 02:15:39.665636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.436 ms 00:20:15.075 [2024-12-15 02:15:39.665644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.665799] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.665810] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:15.075 [2024-12-15 02:15:39.665829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:20:15.075 [2024-12-15 02:15:39.665836] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.691556] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.691602] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:15.075 [2024-12-15 02:15:39.691614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.701 ms 00:20:15.075 [2024-12-15 02:15:39.691621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.716577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.716622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:15.075 [2024-12-15 02:15:39.716634] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.893 ms 00:20:15.075 [2024-12-15 02:15:39.716641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.741428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.741610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:15.075 [2024-12-15 02:15:39.741630] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.740 ms 00:20:15.075 [2024-12-15 02:15:39.741637] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.766071] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.075 [2024-12-15 02:15:39.766113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:15.075 [2024-12-15 02:15:39.766125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.304 ms 00:20:15.075 [2024-12-15 02:15:39.766131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.075 [2024-12-15 02:15:39.766177] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:15.075 [2024-12-15 02:15:39.766193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766270] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:15.075 [2024-12-15 02:15:39.766285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766300] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766365] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766372] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766493] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766561] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766598] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766612] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766649] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766739] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 
02:15:39.766746] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766789] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766825] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766924] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 
00:20:15.076 [2024-12-15 02:15:39.766939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:15.076 [2024-12-15 02:15:39.766947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:15.077 [2024-12-15 02:15:39.766959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:15.077 [2024-12-15 02:15:39.766966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:15.077 [2024-12-15 02:15:39.766974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:15.077 [2024-12-15 02:15:39.766990] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:15.077 [2024-12-15 02:15:39.766998] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:20:15.077 [2024-12-15 02:15:39.767006] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:15.077 [2024-12-15 02:15:39.767014] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:15.077 [2024-12-15 02:15:39.767021] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:15.077 [2024-12-15 02:15:39.767029] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:15.077 [2024-12-15 02:15:39.767036] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:15.077 [2024-12-15 02:15:39.767044] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:15.077 [2024-12-15 02:15:39.767055] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:15.077 [2024-12-15 02:15:39.767061] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:15.077 [2024-12-15 02:15:39.767068] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:15.077 [2024-12-15 02:15:39.767074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.077 [2024-12-15 02:15:39.767082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:15.077 [2024-12-15 02:15:39.767093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.898 ms 00:20:15.077 [2024-12-15 02:15:39.767100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.780898] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.077 [2024-12-15 02:15:39.781059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:15.077 [2024-12-15 02:15:39.781077] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.766 ms 00:20:15.077 [2024-12-15 02:15:39.781085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.781530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:15.077 [2024-12-15 02:15:39.781543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:15.077 [2024-12-15 02:15:39.781553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.398 ms 00:20:15.077 [2024-12-15 02:15:39.781560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.820264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.077 [2024-12-15 02:15:39.820308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize 
reloc 00:20:15.077 [2024-12-15 02:15:39.820320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.077 [2024-12-15 02:15:39.820335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.820421] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.077 [2024-12-15 02:15:39.820432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:15.077 [2024-12-15 02:15:39.820439] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.077 [2024-12-15 02:15:39.820447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.820504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.077 [2024-12-15 02:15:39.820513] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:15.077 [2024-12-15 02:15:39.820521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.077 [2024-12-15 02:15:39.820528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.077 [2024-12-15 02:15:39.820548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.077 [2024-12-15 02:15:39.820557] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:15.077 [2024-12-15 02:15:39.820566] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.077 [2024-12-15 02:15:39.820573] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.904920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.904973] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:15.339 [2024-12-15 02:15:39.904989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.904998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.974582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:15.339 [2024-12-15 02:15:39.974595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.974605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.974676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:15.339 [2024-12-15 02:15:39.974686] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.974695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.974743] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:15.339 [2024-12-15 02:15:39.974752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.974761] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.974868] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:15.339 [2024-12-15 02:15:39.974878] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.974886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.974931] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:15.339 [2024-12-15 02:15:39.974942] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.974951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.974995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.975006] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:15.339 [2024-12-15 02:15:39.975014] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.975023] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.975072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:15.339 [2024-12-15 02:15:39.975086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:15.339 [2024-12-15 02:15:39.975095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:15.339 [2024-12-15 02:15:39.975104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:15.339 [2024-12-15 02:15:39.975294] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 366.761 ms, result 0 00:20:15.908 00:20:15.908 00:20:15.908 02:15:40 ftl.ftl_trim -- ftl/trim.sh@86 -- # cmp --bytes=4194304 /home/vagrant/spdk_repo/spdk/test/ftl/data /dev/zero 00:20:15.908 02:15:40 ftl.ftl_trim -- ftl/trim.sh@87 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:16.481 02:15:41 ftl.ftl_trim -- ftl/trim.sh@90 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/random_pattern --ob=ftl0 --count=1024 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:16.741 [2024-12-15 02:15:41.254338] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
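The "WAF: inf" in the ftl_dev_dump_stats output above follows directly from the counters printed next to it: write amplification is conventionally the ratio of total media writes to user writes, and the dump reports 960 total writes against 0 user writes, so the printed ratio has no finite value. A minimal sketch of that arithmetic, with the figures taken from the dump above:

    total_writes = 960   # "total writes: 960" from the stats dump
    user_writes = 0      # "user writes: 0"
    waf = float("inf") if user_writes == 0 else total_writes / user_writes
    print(waf)           # inf -- matches "WAF: inf"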
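Every management step in the startup and shutdown sequences above is traced by mngt/ftl_mngt.c as a small group of NOTICE records (Action/Rollback, then name, duration, and status), and finish_msg then reports a per-process total, e.g. "Management process finished, name 'FTL shutdown', duration = 366.761 ms, result 0". A rough sketch for pulling the per-step figures back out of a saved copy of this console text; the file name ftl_trim.log is hypothetical, and the summed step durations only approximate the finish_msg total, since each step is timed individually:

    import re

    # Hypothetical file holding the console text above.
    log = open("ftl_trim.log", encoding="utf-8", errors="replace").read()

    # trace_step prints "name: <step>", "duration: <n> ms" and "status: <n>"
    # as consecutive records; extract each field and pair them positionally.
    names = re.findall(r"name: (.+?) \d\d:\d\d:\d\d\.\d{3}", log, flags=re.S)
    durations = [float(d) for d in re.findall(r"duration: ([\d.]+) ms", log)]
    statuses = re.findall(r"\] status: (-?\d+)", log)

    for name, ms, status in zip(names, durations, statuses):
        if status != "0":
            # Collapse whitespace in names that were wrapped across lines.
            print(f"step {' '.join(name.split())!r} finished with status {status}")

    print(f"{len(durations)} steps, summed duration: {sum(durations):.3f} ms")

Note that the finish_msg totals are excluded by design: they use "duration = " rather than the "duration: " form that trace_step emits.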
00:20:16.741 [2024-12-15 02:15:41.254485] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78746 ] 00:20:16.741 [2024-12-15 02:15:41.415044] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:16.741 [2024-12-15 02:15:41.499282] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:16.999 [2024-12-15 02:15:41.709040] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:16.999 [2024-12-15 02:15:41.709092] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:17.258 [2024-12-15 02:15:41.856723] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.258 [2024-12-15 02:15:41.856759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:17.258 [2024-12-15 02:15:41.856768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.258 [2024-12-15 02:15:41.856775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.258 [2024-12-15 02:15:41.858882] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.258 [2024-12-15 02:15:41.858911] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.258 [2024-12-15 02:15:41.858919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.091 ms 00:20:17.258 [2024-12-15 02:15:41.858924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.258 [2024-12-15 02:15:41.858981] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:17.258 [2024-12-15 02:15:41.859585] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:17.258 [2024-12-15 02:15:41.859642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.258 [2024-12-15 02:15:41.859649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.258 [2024-12-15 02:15:41.859656] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:20:17.259 [2024-12-15 02:15:41.859662] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.860734] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:17.259 [2024-12-15 02:15:41.870220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.870330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:17.259 [2024-12-15 02:15:41.870343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.487 ms 00:20:17.259 [2024-12-15 02:15:41.870351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.870412] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.870421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:17.259 [2024-12-15 02:15:41.870427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:17.259 [2024-12-15 02:15:41.870433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.874678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
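The startup just below prints the FTL layout two ways: dump_region gives each region's offset and size in MiB, while the "SB metadata layout" lines give the same regions as raw block offsets and sizes in hex. The two views agree if one FTL block is taken as 4 KiB, an assumption the dump itself bears out (the l2p region appears both as "blocks: 90.00 MiB" and as type:0x2 blk_sz:0x5a00). A small sketch of that cross-check; blk_sz_to_mib is an illustrative helper, not an SPDK API:

    def blk_sz_to_mib(blk_sz_hex: str, block_bytes: int = 4096) -> float:
        # Convert a blk_sz field from the SB metadata layout dump to MiB,
        # assuming 4 KiB FTL blocks (an inference from the dump, not a given).
        return int(blk_sz_hex, 16) * block_bytes / (1024 * 1024)

    assert blk_sz_to_mib("0x5a00") == 90.0   # l2p: "blocks: 90.00 MiB"
    assert blk_sz_to_mib("0x80") == 0.5      # band_md: "blocks: 0.50 MiB"
    assert blk_sz_to_mib("0x800") == 8.0     # p2l0..p2l3: "blocks: 8.00 MiB"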
00:20:17.259 [2024-12-15 02:15:41.874701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.259 [2024-12-15 02:15:41.874708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.215 ms 00:20:17.259 [2024-12-15 02:15:41.874713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.874787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.874795] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.259 [2024-12-15 02:15:41.874801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms 00:20:17.259 [2024-12-15 02:15:41.874807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.874823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.874830] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:17.259 [2024-12-15 02:15:41.874836] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:17.259 [2024-12-15 02:15:41.874841] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.874858] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:17.259 [2024-12-15 02:15:41.877492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.877593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.259 [2024-12-15 02:15:41.877605] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.638 ms 00:20:17.259 [2024-12-15 02:15:41.877612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.877644] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.877650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:17.259 [2024-12-15 02:15:41.877657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:17.259 [2024-12-15 02:15:41.877663] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.877678] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:17.259 [2024-12-15 02:15:41.877693] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:17.259 [2024-12-15 02:15:41.877719] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:17.259 [2024-12-15 02:15:41.877730] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:17.259 [2024-12-15 02:15:41.877808] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:17.259 [2024-12-15 02:15:41.877816] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:17.259 [2024-12-15 02:15:41.877824] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:17.259 [2024-12-15 02:15:41.877834] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:17.259 [2024-12-15 02:15:41.877840] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:17.259 [2024-12-15 02:15:41.877847] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:17.259 [2024-12-15 02:15:41.877852] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:17.259 [2024-12-15 02:15:41.877858] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:17.259 [2024-12-15 02:15:41.877863] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:17.259 [2024-12-15 02:15:41.877869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.877875] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:17.259 [2024-12-15 02:15:41.877881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.193 ms 00:20:17.259 [2024-12-15 02:15:41.877886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.877952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.259 [2024-12-15 02:15:41.877960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:17.259 [2024-12-15 02:15:41.877966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:17.259 [2024-12-15 02:15:41.877972] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.259 [2024-12-15 02:15:41.878043] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:17.259 [2024-12-15 02:15:41.878050] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:17.259 [2024-12-15 02:15:41.878056] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878062] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878068] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:17.259 [2024-12-15 02:15:41.878073] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:17.259 [2024-12-15 02:15:41.878088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.259 [2024-12-15 02:15:41.878099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:17.259 [2024-12-15 02:15:41.878109] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:17.259 [2024-12-15 02:15:41.878114] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:17.259 [2024-12-15 02:15:41.878119] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:17.259 [2024-12-15 02:15:41.878124] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:17.259 [2024-12-15 02:15:41.878130] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878135] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:17.259 [2024-12-15 02:15:41.878141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878146] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:17.259 [2024-12-15 02:15:41.878156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878161] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:17.259 [2024-12-15 02:15:41.878171] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878180] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:17.259 [2024-12-15 02:15:41.878186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878191] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878211] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:17.259 [2024-12-15 02:15:41.878217] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878221] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:17.259 [2024-12-15 02:15:41.878227] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:17.259 [2024-12-15 02:15:41.878232] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878237] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.259 [2024-12-15 02:15:41.878242] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:17.259 [2024-12-15 02:15:41.878247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:17.259 [2024-12-15 02:15:41.878252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:17.259 [2024-12-15 02:15:41.878256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:17.259 [2024-12-15 02:15:41.878262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:17.259 [2024-12-15 02:15:41.878267] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878272] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:17.259 [2024-12-15 02:15:41.878277] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:17.259 [2024-12-15 02:15:41.878282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.259 [2024-12-15 02:15:41.878288] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:17.260 [2024-12-15 02:15:41.878294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:17.260 [2024-12-15 02:15:41.878301] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:17.260 [2024-12-15 02:15:41.878307] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:17.260 [2024-12-15 02:15:41.878313] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:17.260 [2024-12-15 02:15:41.878318] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:17.260 [2024-12-15 02:15:41.878324] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:17.260 
[2024-12-15 02:15:41.878329] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:17.260 [2024-12-15 02:15:41.878333] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:17.260 [2024-12-15 02:15:41.878338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:17.260 [2024-12-15 02:15:41.878345] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:17.260 [2024-12-15 02:15:41.878352] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:17.260 [2024-12-15 02:15:41.878363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:17.260 [2024-12-15 02:15:41.878369] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:17.260 [2024-12-15 02:15:41.878374] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:17.260 [2024-12-15 02:15:41.878380] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:17.260 [2024-12-15 02:15:41.878385] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:17.260 [2024-12-15 02:15:41.878390] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:17.260 [2024-12-15 02:15:41.878397] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:17.260 [2024-12-15 02:15:41.878402] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:17.260 [2024-12-15 02:15:41.878408] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878413] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878418] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878423] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878429] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:17.260 [2024-12-15 02:15:41.878434] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:17.260 [2024-12-15 02:15:41.878440] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878446] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:17.260 [2024-12-15 02:15:41.878451] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:17.260 [2024-12-15 02:15:41.878457] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:17.260 [2024-12-15 02:15:41.878462] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:17.260 [2024-12-15 02:15:41.878467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.878476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:17.260 [2024-12-15 02:15:41.878481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:20:17.260 [2024-12-15 02:15:41.878486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.899145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.899174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.260 [2024-12-15 02:15:41.899182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.619 ms 00:20:17.260 [2024-12-15 02:15:41.899188] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.899301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.899310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:17.260 [2024-12-15 02:15:41.899316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:20:17.260 [2024-12-15 02:15:41.899322] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.938845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.938962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.260 [2024-12-15 02:15:41.938979] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 39.507 ms 00:20:17.260 [2024-12-15 02:15:41.938986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.939043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.939052] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.260 [2024-12-15 02:15:41.939058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:17.260 [2024-12-15 02:15:41.939064] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.939374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.939386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.260 [2024-12-15 02:15:41.939394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.295 ms 00:20:17.260 [2024-12-15 02:15:41.939405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.939504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.939511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.260 [2024-12-15 02:15:41.939517] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.080 ms 00:20:17.260 [2024-12-15 02:15:41.939522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.950143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.950246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.260 [2024-12-15 02:15:41.950260] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.605 ms 00:20:17.260 [2024-12-15 02:15:41.950266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.960055] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 1, empty chunks = 3 00:20:17.260 [2024-12-15 02:15:41.960149] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:17.260 [2024-12-15 02:15:41.960211] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.960229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:17.260 [2024-12-15 02:15:41.960269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.855 ms 00:20:17.260 [2024-12-15 02:15:41.960286] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.978807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.978891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:17.260 [2024-12-15 02:15:41.978932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.449 ms 00:20:17.260 [2024-12-15 02:15:41.978950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.988054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.988159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:17.260 [2024-12-15 02:15:41.988216] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.824 ms 00:20:17.260 [2024-12-15 02:15:41.988236] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.996641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.996727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:17.260 [2024-12-15 02:15:41.996767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.353 ms 00:20:17.260 [2024-12-15 02:15:41.996784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.260 [2024-12-15 02:15:41.997272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.260 [2024-12-15 02:15:41.997358] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:17.260 [2024-12-15 02:15:41.997396] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.422 ms 00:20:17.260 [2024-12-15 02:15:41.997413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.519 [2024-12-15 02:15:42.040638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.040768] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:17.520 [2024-12-15 02:15:42.040807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
43.195 ms 00:20:17.520 [2024-12-15 02:15:42.040825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.048873] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:17.520 [2024-12-15 02:15:42.060414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.060529] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:17.520 [2024-12-15 02:15:42.060570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.298 ms 00:20:17.520 [2024-12-15 02:15:42.060592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.060678] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.060700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:17.520 [2024-12-15 02:15:42.060716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:17.520 [2024-12-15 02:15:42.060735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.060781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.060798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:17.520 [2024-12-15 02:15:42.060865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.022 ms 00:20:17.520 [2024-12-15 02:15:42.060887] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.060921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.060939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:17.520 [2024-12-15 02:15:42.060954] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:17.520 [2024-12-15 02:15:42.060968] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.061000] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:17.520 [2024-12-15 02:15:42.061132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.061146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:17.520 [2024-12-15 02:15:42.061161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.132 ms 00:20:17.520 [2024-12-15 02:15:42.061175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.079116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.079215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:17.520 [2024-12-15 02:15:42.079257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.899 ms 00:20:17.520 [2024-12-15 02:15:42.079275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.079345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.079366] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:17.520 [2024-12-15 02:15:42.079382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:17.520 [2024-12-15 02:15:42.079396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 
[2024-12-15 02:15:42.080275] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.520 [2024-12-15 02:15:42.082720] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 223.322 ms, result 0 00:20:17.520 [2024-12-15 02:15:42.083477] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.520 [2024-12-15 02:15:42.094394] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:17.520  [2024-12-15T02:15:42.285Z] Copying: 4096/4096 [kB] (average 47 MBps) [2024-12-15 02:15:42.180956] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:17.520 [2024-12-15 02:15:42.187809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.187897] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:17.520 [2024-12-15 02:15:42.187946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:17.520 [2024-12-15 02:15:42.187964] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.187992] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:17.520 [2024-12-15 02:15:42.190072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.190149] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:17.520 [2024-12-15 02:15:42.190242] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.023 ms 00:20:17.520 [2024-12-15 02:15:42.190261] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.191811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.191896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:17.520 [2024-12-15 02:15:42.191937] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.521 ms 00:20:17.520 [2024-12-15 02:15:42.191955] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.194919] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.194985] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:17.520 [2024-12-15 02:15:42.195022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.937 ms 00:20:17.520 [2024-12-15 02:15:42.195039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.200382] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.200455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:17.520 [2024-12-15 02:15:42.200491] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.313 ms 00:20:17.520 [2024-12-15 02:15:42.200508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.217292] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.217374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:17.520 [2024-12-15 02:15:42.217410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0]
duration: 16.738 ms 00:20:17.520 [2024-12-15 02:15:42.217425] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.228603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.228693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:17.520 [2024-12-15 02:15:42.228734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.146 ms 00:20:17.520 [2024-12-15 02:15:42.228751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.228853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.228986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:17.520 [2024-12-15 02:15:42.229028] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.067 ms 00:20:17.520 [2024-12-15 02:15:42.229045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.246775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.246854] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:17.520 [2024-12-15 02:15:42.246890] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.708 ms 00:20:17.520 [2024-12-15 02:15:42.246906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.263911] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.263991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:17.520 [2024-12-15 02:15:42.264026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.967 ms 00:20:17.520 [2024-12-15 02:15:42.264042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.520 [2024-12-15 02:15:42.280938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.520 [2024-12-15 02:15:42.281019] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:17.520 [2024-12-15 02:15:42.281054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.854 ms 00:20:17.520 [2024-12-15 02:15:42.281070] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.780 [2024-12-15 02:15:42.297821] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.780 [2024-12-15 02:15:42.297900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:17.780 [2024-12-15 02:15:42.297935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.703 ms 00:20:17.780 [2024-12-15 02:15:42.297951] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.780 [2024-12-15 02:15:42.297981] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:17.780 [2024-12-15 02:15:42.298001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298067] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 
[2024-12-15 02:15:42.298128] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298192] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:17.780 [2024-12-15 02:15:42.298343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298411] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: 
free 00:20:17.781 [2024-12-15 02:15:42.298837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298948] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.298987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299089] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299112] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 
261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299251] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299280] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299325] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299337] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299348] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299354] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299382] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299431] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:17.781 [2024-12-15 02:15:42.299437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:17.782 [2024-12-15 02:15:42.299442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:17.782 [2024-12-15 02:15:42.299448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:17.782 [2024-12-15 02:15:42.299454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:17.782 [2024-12-15 02:15:42.299459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:17.782 [2024-12-15 02:15:42.299472] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:17.782 [2024-12-15 02:15:42.299477] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:20:17.782 [2024-12-15 02:15:42.299483] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:17.782 [2024-12-15 02:15:42.299489] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 
00:20:17.782 [2024-12-15 02:15:42.299494] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:17.782 [2024-12-15 02:15:42.299500] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:17.782 [2024-12-15 02:15:42.299505] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:17.782 [2024-12-15 02:15:42.299512] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:17.782 [2024-12-15 02:15:42.299519] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:17.782 [2024-12-15 02:15:42.299524] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:17.782 [2024-12-15 02:15:42.299529] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:17.782 [2024-12-15 02:15:42.299534] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.782 [2024-12-15 02:15:42.299540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:17.782 [2024-12-15 02:15:42.299547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.554 ms 00:20:17.782 [2024-12-15 02:15:42.299553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.309168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.782 [2024-12-15 02:15:42.309265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:17.782 [2024-12-15 02:15:42.309352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.600 ms 00:20:17.782 [2024-12-15 02:15:42.309369] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.309649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:17.782 [2024-12-15 02:15:42.309714] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:17.782 [2024-12-15 02:15:42.309752] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.243 ms 00:20:17.782 [2024-12-15 02:15:42.309769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.337380] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.337462] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:17.782 [2024-12-15 02:15:42.337498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.337518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.337575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.337700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:17.782 [2024-12-15 02:15:42.337718] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.337733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.337793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.337812] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:17.782 [2024-12-15 02:15:42.337855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.337871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.337898] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.337930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:17.782 [2024-12-15 02:15:42.337947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.337961] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.396544] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.396654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:17.782 [2024-12-15 02:15:42.396692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.396709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.444516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.446781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:17.782 [2024-12-15 02:15:42.446831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.446839] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.446876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.446883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:17.782 [2024-12-15 02:15:42.446889] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.446895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.446916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.446927] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:17.782 [2024-12-15 02:15:42.446933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.446939] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.447010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.447018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:17.782 [2024-12-15 02:15:42.447024] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.447030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.447054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.447061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:17.782 [2024-12-15 02:15:42.447069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.447075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.447104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.447110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:17.782 [2024-12-15 02:15:42.447116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.447122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] 
status: 0 00:20:17.782 [2024-12-15 02:15:42.447153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:17.782 [2024-12-15 02:15:42.447162] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:17.782 [2024-12-15 02:15:42.447168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:17.782 [2024-12-15 02:15:42.447174] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:17.782 [2024-12-15 02:15:42.447286] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 259.471 ms, result 0 00:20:18.350 00:20:18.350 00:20:18.350 02:15:43 ftl.ftl_trim -- ftl/trim.sh@93 -- # svcpid=78771 00:20:18.350 02:15:43 ftl.ftl_trim -- ftl/trim.sh@94 -- # waitforlisten 78771 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@835 -- # '[' -z 78771 ']' 00:20:18.350 02:15:43 ftl.ftl_trim -- ftl/trim.sh@92 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -L ftl_init 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:18.350 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:18.350 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:18.350 [2024-12-15 02:15:43.082637] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:18.350 [2024-12-15 02:15:43.082760] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78771 ] 00:20:18.609 [2024-12-15 02:15:43.239441] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:18.609 [2024-12-15 02:15:43.321783] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:19.176 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:19.176 02:15:43 ftl.ftl_trim -- common/autotest_common.sh@868 -- # return 0 00:20:19.176 02:15:43 ftl.ftl_trim -- ftl/trim.sh@96 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py load_config 00:20:19.434 [2024-12-15 02:15:44.115841] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.434 [2024-12-15 02:15:44.115892] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:19.693 [2024-12-15 02:15:44.279771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.279807] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:19.693 [2024-12-15 02:15:44.279819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.693 [2024-12-15 02:15:44.279825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.281892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.281921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:19.693 [2024-12-15 02:15:44.281930] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.052 ms 00:20:19.693 [2024-12-15 02:15:44.281936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.281992] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:19.693 [2024-12-15 02:15:44.282549] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:19.693 [2024-12-15 02:15:44.282571] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.282577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:19.693 [2024-12-15 02:15:44.282585] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:20:19.693 [2024-12-15 02:15:44.282591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.283564] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:19.693 [2024-12-15 02:15:44.293106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.293137] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:19.693 [2024-12-15 02:15:44.293146] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.545 ms 00:20:19.693 [2024-12-15 02:15:44.293153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.293226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.293237] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:19.693 [2024-12-15 02:15:44.293244] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:19.693 [2024-12-15 02:15:44.293251] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.297518] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.297546] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:19.693 [2024-12-15 02:15:44.297553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.217 ms 00:20:19.693 [2024-12-15 02:15:44.297560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.297639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.693 [2024-12-15 02:15:44.297649] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:19.693 [2024-12-15 02:15:44.297655] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.046 ms 00:20:19.693 [2024-12-15 02:15:44.297664] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.693 [2024-12-15 02:15:44.297683] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.694 [2024-12-15 02:15:44.297690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:19.694 [2024-12-15 02:15:44.297696] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:19.694 [2024-12-15 02:15:44.297702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.694 [2024-12-15 02:15:44.297719] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:19.694 [2024-12-15 02:15:44.300426] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.694 [2024-12-15 02:15:44.300447] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:19.694 [2024-12-15 02:15:44.300455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.709 ms 00:20:19.694 [2024-12-15 02:15:44.300461] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.694 [2024-12-15 02:15:44.300502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.694 [2024-12-15 02:15:44.300509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:19.694 [2024-12-15 02:15:44.300516] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.694 [2024-12-15 02:15:44.300523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.694 [2024-12-15 02:15:44.300539] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:19.694 [2024-12-15 02:15:44.300553] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:19.694 [2024-12-15 02:15:44.300586] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:19.694 [2024-12-15 02:15:44.300597] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:19.694 [2024-12-15 02:15:44.300676] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:19.694 [2024-12-15 02:15:44.300685] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:19.694 [2024-12-15 02:15:44.300696] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:19.694 [2024-12-15 02:15:44.300704] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:19.694 [2024-12-15 02:15:44.300711] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:19.694 [2024-12-15 02:15:44.300718] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:19.694 [2024-12-15 02:15:44.300725] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:19.694 [2024-12-15 02:15:44.300730] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:19.694 [2024-12-15 02:15:44.300738] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:19.694 [2024-12-15 02:15:44.300744] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.694 [2024-12-15 02:15:44.300750] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:19.694 [2024-12-15 02:15:44.300756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.207 ms 00:20:19.694 [2024-12-15 02:15:44.300762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.694 [2024-12-15 02:15:44.300829] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.694 [2024-12-15 02:15:44.300837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:19.694 [2024-12-15 02:15:44.300842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:19.694 [2024-12-15 02:15:44.300849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.694 
[2024-12-15 02:15:44.300925] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:19.694 [2024-12-15 02:15:44.300933] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:19.694 [2024-12-15 02:15:44.300940] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.694 [2024-12-15 02:15:44.300947] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.300952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:19.694 [2024-12-15 02:15:44.300960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.300965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:19.694 [2024-12-15 02:15:44.300973] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:19.694 [2024-12-15 02:15:44.300978] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:19.694 [2024-12-15 02:15:44.300984] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.694 [2024-12-15 02:15:44.300989] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:19.694 [2024-12-15 02:15:44.300995] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:19.694 [2024-12-15 02:15:44.301000] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:19.694 [2024-12-15 02:15:44.301007] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:19.694 [2024-12-15 02:15:44.301013] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:19.694 [2024-12-15 02:15:44.301019] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:19.694 [2024-12-15 02:15:44.301032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301047] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:19.694 [2024-12-15 02:15:44.301052] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301058] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301064] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:19.694 [2024-12-15 02:15:44.301071] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301076] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301083] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:19.694 [2024-12-15 02:15:44.301088] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301094] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301099] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:19.694 [2024-12-15 02:15:44.301105] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301110] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301117] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_md 00:20:19.694 [2024-12-15 02:15:44.301121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301127] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.694 [2024-12-15 02:15:44.301133] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:19.694 [2024-12-15 02:15:44.301140] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:19.694 [2024-12-15 02:15:44.301145] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:19.694 [2024-12-15 02:15:44.301151] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:19.694 [2024-12-15 02:15:44.301156] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:19.694 [2024-12-15 02:15:44.301163] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301168] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:19.694 [2024-12-15 02:15:44.301174] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:19.694 [2024-12-15 02:15:44.301179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301185] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:19.694 [2024-12-15 02:15:44.301193] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:19.694 [2024-12-15 02:15:44.301214] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301220] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:19.694 [2024-12-15 02:15:44.301228] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:19.694 [2024-12-15 02:15:44.301233] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:19.694 [2024-12-15 02:15:44.301239] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:19.694 [2024-12-15 02:15:44.301244] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:19.694 [2024-12-15 02:15:44.301251] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:19.694 [2024-12-15 02:15:44.301262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:19.694 [2024-12-15 02:15:44.301270] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:19.694 [2024-12-15 02:15:44.301277] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.694 [2024-12-15 02:15:44.301287] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:19.694 [2024-12-15 02:15:44.301293] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:19.694 [2024-12-15 02:15:44.301308] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:19.694 [2024-12-15 02:15:44.301314] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:19.694 [2024-12-15 02:15:44.301321] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 
blk_offs:0x6320 blk_sz:0x800 00:20:19.694 [2024-12-15 02:15:44.301327] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:19.694 [2024-12-15 02:15:44.301333] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:19.694 [2024-12-15 02:15:44.301339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:19.694 [2024-12-15 02:15:44.301345] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:19.694 [2024-12-15 02:15:44.301351] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:19.694 [2024-12-15 02:15:44.301358] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:19.694 [2024-12-15 02:15:44.301363] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:19.695 [2024-12-15 02:15:44.301370] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:19.695 [2024-12-15 02:15:44.301376] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:19.695 [2024-12-15 02:15:44.301382] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:19.695 [2024-12-15 02:15:44.301388] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:19.695 [2024-12-15 02:15:44.301396] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:19.695 [2024-12-15 02:15:44.301402] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:19.695 [2024-12-15 02:15:44.301409] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:19.695 [2024-12-15 02:15:44.301414] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:19.695 [2024-12-15 02:15:44.301422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.301427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:19.695 [2024-12-15 02:15:44.301434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.548 ms 00:20:19.695 [2024-12-15 02:15:44.301441] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.322023] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.322049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:19.695 [2024-12-15 02:15:44.322058] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.537 ms 00:20:19.695 [2024-12-15 02:15:44.322065] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 
[2024-12-15 02:15:44.322153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.322160] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:19.695 [2024-12-15 02:15:44.322167] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.047 ms 00:20:19.695 [2024-12-15 02:15:44.322173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.345833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.345950] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:19.695 [2024-12-15 02:15:44.345965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.643 ms 00:20:19.695 [2024-12-15 02:15:44.345971] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.346017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.346024] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:19.695 [2024-12-15 02:15:44.346032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.695 [2024-12-15 02:15:44.346038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.346331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.346348] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:19.695 [2024-12-15 02:15:44.346358] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.276 ms 00:20:19.695 [2024-12-15 02:15:44.346364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.346463] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.346473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:19.695 [2024-12-15 02:15:44.346481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.082 ms 00:20:19.695 [2024-12-15 02:15:44.346486] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.357899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.357925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:19.695 [2024-12-15 02:15:44.357934] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.396 ms 00:20:19.695 [2024-12-15 02:15:44.357940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.389723] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:19.695 [2024-12-15 02:15:44.389784] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:19.695 [2024-12-15 02:15:44.389797] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.389804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:19.695 [2024-12-15 02:15:44.389813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.784 ms 00:20:19.695 [2024-12-15 02:15:44.389823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.408327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.408435] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:19.695 [2024-12-15 02:15:44.408451] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.445 ms 00:20:19.695 [2024-12-15 02:15:44.408458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.417272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.417296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:19.695 [2024-12-15 02:15:44.417307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.760 ms 00:20:19.695 [2024-12-15 02:15:44.417313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.426185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.426214] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:19.695 [2024-12-15 02:15:44.426223] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.831 ms 00:20:19.695 [2024-12-15 02:15:44.426228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.695 [2024-12-15 02:15:44.426691] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.695 [2024-12-15 02:15:44.426704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:19.695 [2024-12-15 02:15:44.426713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.400 ms 00:20:19.695 [2024-12-15 02:15:44.426719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.470083] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.470118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:19.954 [2024-12-15 02:15:44.470129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.344 ms 00:20:19.954 [2024-12-15 02:15:44.470136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.477707] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:19.954 [2024-12-15 02:15:44.488836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.488870] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:19.954 [2024-12-15 02:15:44.488881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.642 ms 00:20:19.954 [2024-12-15 02:15:44.488888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.488957] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.488966] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:19.954 [2024-12-15 02:15:44.488973] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:19.954 [2024-12-15 02:15:44.488980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.489018] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.489027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:19.954 [2024-12-15 02:15:44.489033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 0.024 ms 00:20:19.954 [2024-12-15 02:15:44.489041] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.489059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.489067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:19.954 [2024-12-15 02:15:44.489073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:19.954 [2024-12-15 02:15:44.489081] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.489106] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:19.954 [2024-12-15 02:15:44.489116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.489123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:19.954 [2024-12-15 02:15:44.489131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:19.954 [2024-12-15 02:15:44.489136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.507060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.507087] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:19.954 [2024-12-15 02:15:44.507097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.904 ms 00:20:19.954 [2024-12-15 02:15:44.507104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.507170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:19.954 [2024-12-15 02:15:44.507179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:19.954 [2024-12-15 02:15:44.507187] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.025 ms 00:20:19.954 [2024-12-15 02:15:44.507207] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:19.954 [2024-12-15 02:15:44.507824] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:19.954 [2024-12-15 02:15:44.510065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.840 ms, result 0 00:20:19.954 [2024-12-15 02:15:44.511425] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:19.954 Some configs were skipped because the RPC state that can call them passed over. 
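For reference before the xtrace below: trim.sh drives the FTL unmap path over SPDK's JSON-RPC interface, and the two calls it issues can be replayed against a running spdk_tgt as sketched here. The rpc.py path and the -b/--lba/--num_blocks values are exactly those from this run; the second LBA targets the end of the L2P range reported during startup (23592960 entries, so 23592960 - 1024 = 23591936):

# Replay the two bdev_ftl_unmap calls made by ftl/trim.sh (@99 and @100 below):
# 1024 blocks at the start of the device, then the last 1024 blocks of the
# 23592960-entry L2P range (23592960 - 1024 = 23591936).
RPC=/home/vagrant/spdk_repo/spdk/scripts/rpc.py   # path as used in this run
"$RPC" bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024
"$RPC" bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024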
00:20:19.954 02:15:44 ftl.ftl_trim -- ftl/trim.sh@99 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 0 --num_blocks 1024 00:20:20.213 [2024-12-15 02:15:44.735312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.213 [2024-12-15 02:15:44.735438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.213 [2024-12-15 02:15:44.735541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.838 ms 00:20:20.213 [2024-12-15 02:15:44.735570] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.213 [2024-12-15 02:15:44.735616] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.142 ms, result 0 00:20:20.213 true 00:20:20.213 02:15:44 ftl.ftl_trim -- ftl/trim.sh@100 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unmap -b ftl0 --lba 23591936 --num_blocks 1024 00:20:20.213 [2024-12-15 02:15:44.935434] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:20.213 [2024-12-15 02:15:44.935535] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Process trim 00:20:20.213 [2024-12-15 02:15:44.935580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.796 ms 00:20:20.213 [2024-12-15 02:15:44.935599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:20.213 [2024-12-15 02:15:44.935640] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL trim', duration = 1.003 ms, result 0 00:20:20.213 true 00:20:20.213 02:15:44 ftl.ftl_trim -- ftl/trim.sh@102 -- # killprocess 78771 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78771 ']' 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78771 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # uname 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 78771 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:20:20.213 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@972 -- # echo 'killing process with pid 78771' 00:20:20.213 killing process with pid 78771 00:20:20.475 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@973 -- # kill 78771 00:20:20.475 02:15:44 ftl.ftl_trim -- common/autotest_common.sh@978 -- # wait 78771 00:20:21.043 [2024-12-15 02:15:45.513484] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.513531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:21.043 [2024-12-15 02:15:45.513541] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:21.043 [2024-12-15 02:15:45.513548] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.513567] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:21.043 [2024-12-15 02:15:45.515643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.515668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:21.043 [2024-12-15 02:15:45.515679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 2.063 ms 00:20:21.043 [2024-12-15 02:15:45.515685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.515909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.515916] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:21.043 [2024-12-15 02:15:45.515924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.203 ms 00:20:21.043 [2024-12-15 02:15:45.515929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.519262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.519288] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:21.043 [2024-12-15 02:15:45.519298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.316 ms 00:20:21.043 [2024-12-15 02:15:45.519304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.524467] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.524601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:21.043 [2024-12-15 02:15:45.524617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.135 ms 00:20:21.043 [2024-12-15 02:15:45.524623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.531795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.531895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:21.043 [2024-12-15 02:15:45.531911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.124 ms 00:20:21.043 [2024-12-15 02:15:45.531918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.538149] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.538174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:21.043 [2024-12-15 02:15:45.538184] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.201 ms 00:20:21.043 [2024-12-15 02:15:45.538190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.538305] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.538313] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:21.043 [2024-12-15 02:15:45.538321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:20:21.043 [2024-12-15 02:15:45.538326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.546180] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.546293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:21.043 [2024-12-15 02:15:45.546307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.837 ms 00:20:21.043 [2024-12-15 02:15:45.546314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.553689] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.553847] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:21.043 [2024-12-15 
02:15:45.553895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.345 ms 00:20:21.043 [2024-12-15 02:15:45.553912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.560735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.560815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:21.043 [2024-12-15 02:15:45.560863] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.750 ms 00:20:21.043 [2024-12-15 02:15:45.560879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.043 [2024-12-15 02:15:45.568003] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.043 [2024-12-15 02:15:45.568082] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:21.044 [2024-12-15 02:15:45.568120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 7.067 ms 00:20:21.044 [2024-12-15 02:15:45.568136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.044 [2024-12-15 02:15:45.568169] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:21.044 [2024-12-15 02:15:45.568189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568769] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.568945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569003] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569075] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569193] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569634] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 
02:15:45.569780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.569995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570132] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570208] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570235] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570277] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 
00:20:21.044 [2024-12-15 02:15:45.570527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570716] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.570991] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571083] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571129] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571153] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:21.044 [2024-12-15 02:15:45.571207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 
wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571253] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:21.045 [2024-12-15 02:15:45.571525] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:21.045 [2024-12-15 02:15:45.571546] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:20:21.045 [2024-12-15 02:15:45.571570] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:21.045 [2024-12-15 02:15:45.571612] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:21.045 [2024-12-15 02:15:45.571628] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:21.045 [2024-12-15 02:15:45.571644] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:21.045 [2024-12-15 02:15:45.571659] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:21.045 [2024-12-15 02:15:45.571674] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:21.045 [2024-12-15 02:15:45.571689] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:21.045 [2024-12-15 02:15:45.571703] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:21.045 [2024-12-15 02:15:45.571738] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:21.045 [2024-12-15 02:15:45.571758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.045 [2024-12-15 02:15:45.571773] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:21.045 [2024-12-15 02:15:45.571791] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.590 ms 00:20:21.045 [2024-12-15 02:15:45.571805] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.581517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:21.045 [2024-12-15 02:15:45.581598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:21.045 [2024-12-15 02:15:45.581640] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.663 ms 00:20:21.045 [2024-12-15 02:15:45.581657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.581956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:20:21.045 [2024-12-15 02:15:45.582016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:21.045 [2024-12-15 02:15:45.582060] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:20:21.045 [2024-12-15 02:15:45.582076] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.617115] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.617208] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:21.045 [2024-12-15 02:15:45.617251] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.617275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.617357] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.617376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:21.045 [2024-12-15 02:15:45.617394] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.617408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.617454] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.617514] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:21.045 [2024-12-15 02:15:45.617539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.617553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.617578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.617594] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:21.045 [2024-12-15 02:15:45.617609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.617624] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.676493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.676620] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:21.045 [2024-12-15 02:15:45.676659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.676675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.724409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.724522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:21.045 [2024-12-15 02:15:45.724563] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.724582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.724654] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.724673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:21.045 [2024-12-15 02:15:45.724691] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.724705] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:20:21.045 [2024-12-15 02:15:45.724738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.724754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:21.045 [2024-12-15 02:15:45.724770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.724819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.724942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.724964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:21.045 [2024-12-15 02:15:45.724980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.724994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.725081] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.725100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:21.045 [2024-12-15 02:15:45.725117] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.725131] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.725173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.725229] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:21.045 [2024-12-15 02:15:45.725250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.725273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.725321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:21.045 [2024-12-15 02:15:45.725330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:21.045 [2024-12-15 02:15:45.725339] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:21.045 [2024-12-15 02:15:45.725345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:21.045 [2024-12-15 02:15:45.725454] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 211.949 ms, result 0 00:20:21.614 02:15:46 ftl.ftl_trim -- ftl/trim.sh@105 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/data --count=65536 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:21.614 [2024-12-15 02:15:46.305448] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
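After the clean 'FTL shutdown' above, the test reads the device contents back with spdk_dd. A minimal sketch of the invocation being started here, with the flags exactly as they appear in the log (ftl.json is presumably the bdev configuration saved earlier in the run; that save step is outside this excerpt):

  SPDK=/home/vagrant/spdk_repo/spdk
  # Read 65536 blocks from the FTL bdev ftl0 into a local file, re-creating
  # the bdev stack from the saved JSON config rather than a live RPC server.
  $SPDK/build/bin/spdk_dd --ib=ftl0 \
      --of=$SPDK/test/ftl/data \
      --count=65536 \
      --json=$SPDK/test/ftl/config/ftl.json

The EAL parameter dump and the second FTL startup trace that follow are this spdk_dd process bringing the same device back up in a fresh SPDK instance.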
00:20:21.614 [2024-12-15 02:15:46.305575] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid78818 ] 00:20:21.872 [2024-12-15 02:15:46.463063] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:21.872 [2024-12-15 02:15:46.547325] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:22.131 [2024-12-15 02:15:46.755918] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.131 [2024-12-15 02:15:46.755970] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:22.391 [2024-12-15 02:15:46.903613] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.903753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:22.391 [2024-12-15 02:15:46.903768] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:22.391 [2024-12-15 02:15:46.903775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.905836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.905864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:22.391 [2024-12-15 02:15:46.905871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.045 ms 00:20:22.391 [2024-12-15 02:15:46.905877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.905933] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:22.391 [2024-12-15 02:15:46.906474] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:22.391 [2024-12-15 02:15:46.906489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.906495] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:22.391 [2024-12-15 02:15:46.906502] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.562 ms 00:20:22.391 [2024-12-15 02:15:46.906508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.907648] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:22.391 [2024-12-15 02:15:46.917113] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.917140] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:22.391 [2024-12-15 02:15:46.917148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.467 ms 00:20:22.391 [2024-12-15 02:15:46.917154] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.917237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.917246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:22.391 [2024-12-15 02:15:46.917253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:20:22.391 [2024-12-15 02:15:46.917274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.921530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:20:22.391 [2024-12-15 02:15:46.921554] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:22.391 [2024-12-15 02:15:46.921561] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.225 ms 00:20:22.391 [2024-12-15 02:15:46.921566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.921638] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.921646] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:22.391 [2024-12-15 02:15:46.921652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:20:22.391 [2024-12-15 02:15:46.921658] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.921675] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.921681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:22.391 [2024-12-15 02:15:46.921688] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:22.391 [2024-12-15 02:15:46.921693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.921710] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on ftl_core_thread 00:20:22.391 [2024-12-15 02:15:46.924314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.924336] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:22.391 [2024-12-15 02:15:46.924343] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:20:22.391 [2024-12-15 02:15:46.924349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.924377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.924384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:22.391 [2024-12-15 02:15:46.924390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:22.391 [2024-12-15 02:15:46.924395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.924410] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:22.391 [2024-12-15 02:15:46.924425] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:22.391 [2024-12-15 02:15:46.924450] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:22.391 [2024-12-15 02:15:46.924461] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:22.391 [2024-12-15 02:15:46.924539] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:22.391 [2024-12-15 02:15:46.924547] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:22.391 [2024-12-15 02:15:46.924555] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:22.391 [2024-12-15 02:15:46.924564] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:22.391 [2024-12-15 02:15:46.924571] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:22.391 [2024-12-15 02:15:46.924577] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 23592960 00:20:22.391 [2024-12-15 02:15:46.924583] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:22.391 [2024-12-15 02:15:46.924589] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:22.391 [2024-12-15 02:15:46.924594] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:22.391 [2024-12-15 02:15:46.924600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.924605] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:22.391 [2024-12-15 02:15:46.924611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.191 ms 00:20:22.391 [2024-12-15 02:15:46.924617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.924684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.391 [2024-12-15 02:15:46.924692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:22.391 [2024-12-15 02:15:46.924698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:22.391 [2024-12-15 02:15:46.924703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.391 [2024-12-15 02:15:46.924775] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:22.391 [2024-12-15 02:15:46.924782] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:22.391 [2024-12-15 02:15:46.924788] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.391 [2024-12-15 02:15:46.924794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.391 [2024-12-15 02:15:46.924799] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:22.391 [2024-12-15 02:15:46.924805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:22.391 [2024-12-15 02:15:46.924810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 90.00 MiB 00:20:22.391 [2024-12-15 02:15:46.924815] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:22.391 [2024-12-15 02:15:46.924821] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.12 MiB 00:20:22.391 [2024-12-15 02:15:46.924826] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.391 [2024-12-15 02:15:46.924831] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:22.392 [2024-12-15 02:15:46.924841] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 90.62 MiB 00:20:22.392 [2024-12-15 02:15:46.924846] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:22.392 [2024-12-15 02:15:46.924851] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:22.392 [2024-12-15 02:15:46.924856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.88 MiB 00:20:22.392 [2024-12-15 02:15:46.924861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924866] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:22.392 [2024-12-15 02:15:46.924871] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 124.00 MiB 00:20:22.392 [2024-12-15 02:15:46.924876] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:22.392 [2024-12-15 02:15:46.924886] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 91.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924891] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.392 [2024-12-15 02:15:46.924896] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:22.392 [2024-12-15 02:15:46.924901] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 99.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924906] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.392 [2024-12-15 02:15:46.924911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:22.392 [2024-12-15 02:15:46.924916] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 107.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924921] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.392 [2024-12-15 02:15:46.924927] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:22.392 [2024-12-15 02:15:46.924932] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 115.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924936] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:22.392 [2024-12-15 02:15:46.924941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:22.392 [2024-12-15 02:15:46.924946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924951] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.392 [2024-12-15 02:15:46.924956] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:22.392 [2024-12-15 02:15:46.924960] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.38 MiB 00:20:22.392 [2024-12-15 02:15:46.924965] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:22.392 [2024-12-15 02:15:46.924970] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:22.392 [2024-12-15 02:15:46.924975] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.62 MiB 00:20:22.392 [2024-12-15 02:15:46.924980] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.392 [2024-12-15 02:15:46.924985] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:22.392 [2024-12-15 02:15:46.924990] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 123.75 MiB 00:20:22.392 [2024-12-15 02:15:46.924995] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.392 [2024-12-15 02:15:46.925002] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:22.392 [2024-12-15 02:15:46.925008] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:22.392 [2024-12-15 02:15:46.925015] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:22.392 [2024-12-15 02:15:46.925020] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:22.392 [2024-12-15 02:15:46.925026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:22.392 [2024-12-15 02:15:46.925031] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:22.392 [2024-12-15 02:15:46.925036] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:22.392 
[2024-12-15 02:15:46.925041] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:22.392 [2024-12-15 02:15:46.925046] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:22.392 [2024-12-15 02:15:46.925050] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:22.392 [2024-12-15 02:15:46.925056] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:22.392 [2024-12-15 02:15:46.925064] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925070] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5a00 00:20:22.392 [2024-12-15 02:15:46.925075] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5a20 blk_sz:0x80 00:20:22.392 [2024-12-15 02:15:46.925081] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x5aa0 blk_sz:0x80 00:20:22.392 [2024-12-15 02:15:46.925086] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5b20 blk_sz:0x800 00:20:22.392 [2024-12-15 02:15:46.925092] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x6320 blk_sz:0x800 00:20:22.392 [2024-12-15 02:15:46.925097] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6b20 blk_sz:0x800 00:20:22.392 [2024-12-15 02:15:46.925103] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x7320 blk_sz:0x800 00:20:22.392 [2024-12-15 02:15:46.925108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7b20 blk_sz:0x40 00:20:22.392 [2024-12-15 02:15:46.925114] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7b60 blk_sz:0x40 00:20:22.392 [2024-12-15 02:15:46.925119] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x7ba0 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925124] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x7bc0 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x7be0 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925135] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7c00 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925141] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7c20 blk_sz:0x13b6e0 00:20:22.392 [2024-12-15 02:15:46.925146] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:22.392 [2024-12-15 02:15:46.925152] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925158] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:20:22.392 [2024-12-15 02:15:46.925164] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:22.392 [2024-12-15 02:15:46.925170] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:22.392 [2024-12-15 02:15:46.925175] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:22.392 [2024-12-15 02:15:46.925181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.925188] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:22.392 [2024-12-15 02:15:46.925211] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.458 ms 00:20:22.392 [2024-12-15 02:15:46.925217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.945851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.945955] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:22.392 [2024-12-15 02:15:46.945968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.594 ms 00:20:22.392 [2024-12-15 02:15:46.945974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.946069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.946077] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:22.392 [2024-12-15 02:15:46.946083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:20:22.392 [2024-12-15 02:15:46.946089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.988753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.988783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:22.392 [2024-12-15 02:15:46.988794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 42.648 ms 00:20:22.392 [2024-12-15 02:15:46.988800] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.988858] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.988866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:22.392 [2024-12-15 02:15:46.988873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:20:22.392 [2024-12-15 02:15:46.988879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.989164] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.989176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:22.392 [2024-12-15 02:15:46.989183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.271 ms 00:20:22.392 [2024-12-15 02:15:46.989193] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:46.989330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:46.989339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:22.392 [2024-12-15 02:15:46.989345] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.098 ms 00:20:22.392 [2024-12-15 02:15:46.989351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:47.000009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:47.000122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:22.392 [2024-12-15 02:15:47.000134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.643 ms 00:20:22.392 [2024-12-15 02:15:47.000140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.392 [2024-12-15 02:15:47.009889] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:20:22.392 [2024-12-15 02:15:47.009993] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:22.392 [2024-12-15 02:15:47.010044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.392 [2024-12-15 02:15:47.010060] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:22.392 [2024-12-15 02:15:47.010075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.789 ms 00:20:22.393 [2024-12-15 02:15:47.010088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.028302] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.028390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:22.393 [2024-12-15 02:15:47.028433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.164 ms 00:20:22.393 [2024-12-15 02:15:47.028451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.037151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.037244] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:22.393 [2024-12-15 02:15:47.037347] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.641 ms 00:20:22.393 [2024-12-15 02:15:47.037364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.046004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.046089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:20:22.393 [2024-12-15 02:15:47.046128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.594 ms 00:20:22.393 [2024-12-15 02:15:47.046145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.046608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.046681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:22.393 [2024-12-15 02:15:47.046720] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.391 ms 00:20:22.393 [2024-12-15 02:15:47.046737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.089655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.089774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:22.393 [2024-12-15 02:15:47.089814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
42.889 ms 00:20:22.393 [2024-12-15 02:15:47.089831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.097594] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 59 (of 60) MiB 00:20:22.393 [2024-12-15 02:15:47.108757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.108865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:22.393 [2024-12-15 02:15:47.108902] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.859 ms 00:20:22.393 [2024-12-15 02:15:47.108923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.109007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.109028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:22.393 [2024-12-15 02:15:47.109043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:20:22.393 [2024-12-15 02:15:47.109057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.109104] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.109121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:22.393 [2024-12-15 02:15:47.109137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:20:22.393 [2024-12-15 02:15:47.109222] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.109280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.109299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:22.393 [2024-12-15 02:15:47.109315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:22.393 [2024-12-15 02:15:47.109331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.109367] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:22.393 [2024-12-15 02:15:47.109468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.109483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:22.393 [2024-12-15 02:15:47.109498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.102 ms 00:20:22.393 [2024-12-15 02:15:47.109512] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.127293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.127384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:22.393 [2024-12-15 02:15:47.127423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.720 ms 00:20:22.393 [2024-12-15 02:15:47.127440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 [2024-12-15 02:15:47.127512] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:22.393 [2024-12-15 02:15:47.127532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:22.393 [2024-12-15 02:15:47.127548] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:20:22.393 [2024-12-15 02:15:47.127563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:22.393 
[2024-12-15 02:15:47.128231] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:22.393 [2024-12-15 02:15:47.130615] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 224.372 ms, result 0 00:20:22.393 [2024-12-15 02:15:47.131373] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:22.393 [2024-12-15 02:15:47.146158] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:23.770  [2024-12-15T02:15:49.479Z] Copying: 58/256 [MB] (58 MBps) [2024-12-15T02:15:50.424Z] Copying: 72/256 [MB] (14 MBps) [2024-12-15T02:15:51.370Z] Copying: 84400/262144 [kB] (10136 kBps) [2024-12-15T02:15:52.315Z] Copying: 92/256 [MB] (10 MBps) [2024-12-15T02:15:53.260Z] Copying: 102/256 [MB] (10 MBps) [2024-12-15T02:15:54.300Z] Copying: 126/256 [MB] (23 MBps) [2024-12-15T02:15:55.237Z] Copying: 144/256 [MB] (18 MBps) [2024-12-15T02:15:56.622Z] Copying: 168/256 [MB] (24 MBps) [2024-12-15T02:15:57.194Z] Copying: 188/256 [MB] (19 MBps) [2024-12-15T02:15:58.582Z] Copying: 228/256 [MB] (40 MBps) [2024-12-15T02:15:58.841Z] Copying: 245/256 [MB] (17 MBps) [2024-12-15T02:15:59.412Z] Copying: 256/256 [MB] (average 21 MBps)[2024-12-15 02:15:59.180235] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:20:34.647 [2024-12-15 02:15:59.188684] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.647 [2024-12-15 02:15:59.188727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:34.647 [2024-12-15 02:15:59.188745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:34.648 [2024-12-15 02:15:59.188753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.188777] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on ftl_core_thread 00:20:34.648 [2024-12-15 02:15:59.191066] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.191101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:34.648 [2024-12-15 02:15:59.191111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.276 ms 00:20:34.648 [2024-12-15 02:15:59.191117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.191403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.191414] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:34.648 [2024-12-15 02:15:59.191423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.242 ms 00:20:34.648 [2024-12-15 02:15:59.191430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.194245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.194263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:34.648 [2024-12-15 02:15:59.194271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.797 ms 00:20:34.648 [2024-12-15 02:15:59.194277] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.199441] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 
[2024-12-15 02:15:59.199593] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:34.648 [2024-12-15 02:15:59.199610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.150 ms 00:20:34.648 [2024-12-15 02:15:59.199617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.221334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.221370] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:34.648 [2024-12-15 02:15:59.221380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.671 ms 00:20:34.648 [2024-12-15 02:15:59.221387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.234187] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.234232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:34.648 [2024-12-15 02:15:59.234247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.761 ms 00:20:34.648 [2024-12-15 02:15:59.234254] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.234377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.234386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:34.648 [2024-12-15 02:15:59.234400] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.084 ms 00:20:34.648 [2024-12-15 02:15:59.234407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.252861] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.252894] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:34.648 [2024-12-15 02:15:59.252904] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.440 ms 00:20:34.648 [2024-12-15 02:15:59.252910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.270910] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.270937] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:34.648 [2024-12-15 02:15:59.270946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.966 ms 00:20:34.648 [2024-12-15 02:15:59.270952] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.288110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.288135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:34.648 [2024-12-15 02:15:59.288143] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.129 ms 00:20:34.648 [2024-12-15 02:15:59.288149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.305250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.648 [2024-12-15 02:15:59.305279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:34.648 [2024-12-15 02:15:59.305287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.037 ms 00:20:34.648 [2024-12-15 02:15:59.305293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.648 [2024-12-15 02:15:59.305320] 
ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:34.648 [2024-12-15 02:15:59.305331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305357] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305398] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305404] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305471] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305476] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305519] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305525] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305548] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305576] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305605] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305611] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 
02:15:59.305617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305623] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:34.648 [2024-12-15 02:15:59.305635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305641] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305715] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305753] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 
00:20:34.649 [2024-12-15 02:15:59.305759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305770] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305781] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305793] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305804] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305859] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305896] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 
wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:34.649 [2024-12-15 02:15:59.305920] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:34.649 [2024-12-15 02:15:59.305925] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b8fe8d93-efde-44df-91f9-97e8d7b46861 00:20:34.649 [2024-12-15 02:15:59.305932] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:34.649 [2024-12-15 02:15:59.305937] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:34.649 [2024-12-15 02:15:59.305942] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:34.649 [2024-12-15 02:15:59.305948] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:34.649 [2024-12-15 02:15:59.305953] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:34.649 [2024-12-15 02:15:59.305959] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:34.649 [2024-12-15 02:15:59.305967] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:34.649 [2024-12-15 02:15:59.305972] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:34.649 [2024-12-15 02:15:59.305977] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:34.649 [2024-12-15 02:15:59.305983] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.649 [2024-12-15 02:15:59.305988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:34.649 [2024-12-15 02:15:59.305995] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.663 ms 00:20:34.649 [2024-12-15 02:15:59.306000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.315539] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.649 [2024-12-15 02:15:59.315562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:20:34.649 [2024-12-15 02:15:59.315570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.525 ms 00:20:34.649 [2024-12-15 02:15:59.315576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.315854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:34.649 [2024-12-15 02:15:59.315861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:34.649 [2024-12-15 02:15:59.315867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.250 ms 00:20:34.649 [2024-12-15 02:15:59.315872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.343422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.649 [2024-12-15 02:15:59.343448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:34.649 [2024-12-15 02:15:59.343456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.649 [2024-12-15 02:15:59.343466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.343520] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.649 [2024-12-15 02:15:59.343527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:34.649 [2024-12-15 02:15:59.343532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:20:34.649 [2024-12-15 02:15:59.343538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.343569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.649 [2024-12-15 02:15:59.343576] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:34.649 [2024-12-15 02:15:59.343582] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.649 [2024-12-15 02:15:59.343587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.343602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.649 [2024-12-15 02:15:59.343608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:34.649 [2024-12-15 02:15:59.343613] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.649 [2024-12-15 02:15:59.343618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.649 [2024-12-15 02:15:59.402604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.649 [2024-12-15 02:15:59.402639] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:34.649 [2024-12-15 02:15:59.402648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.649 [2024-12-15 02:15:59.402654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.450972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:34.908 [2024-12-15 02:15:59.451012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:34.908 [2024-12-15 02:15:59.451069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:34.908 [2024-12-15 02:15:59.451113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:34.908 [2024-12-15 02:15:59.451214] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451245] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:34.908 
[2024-12-15 02:15:59.451261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451295] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451302] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:34.908 [2024-12-15 02:15:59.451307] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:34.908 [2024-12-15 02:15:59.451355] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:34.908 [2024-12-15 02:15:59.451361] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:34.908 [2024-12-15 02:15:59.451367] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:34.908 [2024-12-15 02:15:59.451472] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 262.787 ms, result 0 00:20:35.476 00:20:35.476 00:20:35.476 02:15:59 ftl.ftl_trim -- ftl/trim.sh@106 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:36.046 /home/vagrant/spdk_repo/spdk/test/ftl/data: OK 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@108 -- # trap - SIGINT SIGTERM EXIT 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@109 -- # fio_kill 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@15 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@16 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@17 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/random_pattern 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@18 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/data 00:20:36.046 02:16:00 ftl.ftl_trim -- ftl/trim.sh@20 -- # killprocess 78771 00:20:36.046 Process with pid 78771 is not found 00:20:36.046 02:16:00 ftl.ftl_trim -- common/autotest_common.sh@954 -- # '[' -z 78771 ']' 00:20:36.046 02:16:00 ftl.ftl_trim -- common/autotest_common.sh@958 -- # kill -0 78771 00:20:36.046 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (78771) - No such process 00:20:36.046 02:16:00 ftl.ftl_trim -- common/autotest_common.sh@981 -- # echo 'Process with pid 78771 is not found' 00:20:36.046 00:20:36.046 real 1m12.360s 00:20:36.046 user 1m28.552s 00:20:36.046 sys 0m15.264s 00:20:36.046 02:16:00 ftl.ftl_trim -- common/autotest_common.sh@1130 -- # xtrace_disable 00:20:36.046 02:16:00 ftl.ftl_trim -- common/autotest_common.sh@10 -- # set +x 00:20:36.046 ************************************ 00:20:36.046 END TEST ftl_trim 00:20:36.046 ************************************ 00:20:36.046 02:16:00 ftl -- ftl/ftl.sh@76 -- # run_test ftl_restore /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:36.046 02:16:00 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:20:36.046 02:16:00 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:20:36.046 02:16:00 ftl -- common/autotest_common.sh@10 -- # set +x 00:20:36.046 ************************************ 00:20:36.046 START TEST ftl_restore 00:20:36.046 ************************************ 
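The END/START banners and the real/user/sys timing block above come from the run_test wrapper in test/common/autotest_common.sh, which names the test, runs it under xtrace, and times it with the shell's time keyword. A rough sketch of that shape, assuming only what the banners and timing output show (the real helper also handles argument checks and exit status):

    # run_test, approximately: banner, timed execution, banner (sketch).
    run_test() {
      local name=$1; shift
      echo "************************************"
      echo "START TEST $name"
      echo "************************************"
      time "$@"
      echo "************************************"
      echo "END TEST $name"
      echo "************************************"
    }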
00:20:36.046 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -c 0000:00:10.0 0000:00:11.0 00:20:36.046 * Looking for test storage... 00:20:36.046 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.046 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:20:36.046 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lcov --version 00:20:36.046 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@333 -- # local ver1 ver1_l 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@334 -- # local ver2 ver2_l 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@336 -- # IFS=.-: 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@336 -- # read -ra ver1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@337 -- # IFS=.-: 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@337 -- # read -ra ver2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@338 -- # local 'op=<' 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@340 -- # ver1_l=2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@341 -- # ver2_l=1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@344 -- # case "$op" in 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@345 -- # : 1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v = 0 )) 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@365 -- # decimal 1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@365 -- # ver1[v]=1 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@366 -- # decimal 2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@353 -- # local d=2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@355 -- # echo 2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@366 -- # ver2[v]=2 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:20:36.306 02:16:00 ftl.ftl_restore -- scripts/common.sh@368 -- # return 0 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:20:36.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.306 --rc genhtml_branch_coverage=1 00:20:36.306 --rc genhtml_function_coverage=1 00:20:36.306 --rc genhtml_legend=1 00:20:36.306 --rc geninfo_all_blocks=1 00:20:36.306 --rc geninfo_unexecuted_blocks=1 00:20:36.306 00:20:36.306 ' 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:20:36.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.306 --rc genhtml_branch_coverage=1 00:20:36.306 --rc genhtml_function_coverage=1 00:20:36.306 --rc genhtml_legend=1 00:20:36.306 --rc geninfo_all_blocks=1 00:20:36.306 --rc geninfo_unexecuted_blocks=1 00:20:36.306 00:20:36.306 ' 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:20:36.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.306 --rc genhtml_branch_coverage=1 00:20:36.306 --rc genhtml_function_coverage=1 00:20:36.306 --rc genhtml_legend=1 00:20:36.306 --rc geninfo_all_blocks=1 00:20:36.306 --rc geninfo_unexecuted_blocks=1 00:20:36.306 00:20:36.306 ' 00:20:36.306 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:20:36.306 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:20:36.306 --rc genhtml_branch_coverage=1 00:20:36.306 --rc genhtml_function_coverage=1 00:20:36.306 --rc genhtml_legend=1 00:20:36.306 --rc geninfo_all_blocks=1 00:20:36.306 --rc geninfo_unexecuted_blocks=1 00:20:36.306 00:20:36.306 ' 00:20:36.306 02:16:00 ftl.ftl_restore -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:20:36.306 02:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:20:36.306 02:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.306 02:16:00 ftl.ftl_restore -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@23 -- # spdk_ini_pid= 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mktemp -d 00:20:36.307 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 
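The 'Waiting for process to start up...' banner above is printed by the waitforlisten helper (its xtrace follows), which blocks until the freshly launched spdk_tgt answers on the UNIX-domain RPC socket or max_retries is exhausted. A rough equivalent of that loop — the rpc_get_methods probe and the sleep interval are assumptions; max_retries=100 and the socket path are taken from the trace, and the real helper lives in test/common/autotest_common.sh:

    # Poll the SPDK RPC socket until the target responds or retries run out (sketch).
    for ((i = 0; i < 100; i++)); do
      /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.sock \
        rpc_get_methods &>/dev/null && break
      kill -0 "$svcpid" 2>/dev/null || return 1   # give up if spdk_tgt died
      sleep 0.5
    done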
00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.rYgkVDsYWW 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@16 -- # case $opt in 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@23 -- # shift 2 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@25 -- # timeout=240 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@39 -- # svcpid=79033 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@41 -- # waitforlisten 79033 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@835 -- # '[' -z 79033 ']' 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@840 -- # local max_retries=100 00:20:36.307 02:16:00 ftl.ftl_restore -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@844 -- # xtrace_disable 00:20:36.307 02:16:00 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:20:36.307 [2024-12-15 02:16:00.933714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
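Reading the getopts trace above back into source form: restore.sh was invoked as 'restore.sh -c 0000:00:10.0 0000:00:11.0', so -c selects the NV-cache PCI address and the remaining positional argument becomes the base device. A reconstruction of the visible control flow (the $OPTARG and $1 assignments are inferred from the expanded values in the trace; the unexercised -u and -f branches are omitted):

    mount_dir=$(mktemp -d)
    while getopts ':u:c:f' opt; do
      case $opt in
        c) nv_cache=$OPTARG ;;    # -c 0000:00:10.0: NV cache device
      esac
    done
    shift 2                       # consume '-c <bdf>', leaving the base device
    device=$1                     # 0000:00:11.0
    timeout=240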
00:20:36.307 [2024-12-15 02:16:00.933838] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79033 ] 00:20:36.565 [2024-12-15 02:16:01.091409] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:36.565 [2024-12-15 02:16:01.177737] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:37.130 02:16:01 ftl.ftl_restore -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:20:37.130 02:16:01 ftl.ftl_restore -- common/autotest_common.sh@868 -- # return 0 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/common.sh@54 -- # local name=nvme0 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/common.sh@56 -- # local size=103424 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/common.sh@59 -- # local base_bdev 00:20:37.130 02:16:01 ftl.ftl_restore -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:20:37.389 02:16:02 ftl.ftl_restore -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:20:37.389 02:16:02 ftl.ftl_restore -- ftl/common.sh@62 -- # local base_size 00:20:37.389 02:16:02 ftl.ftl_restore -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:20:37.389 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:20:37.389 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:37.389 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:37.389 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:37.389 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:37.647 { 00:20:37.647 "name": "nvme0n1", 00:20:37.647 "aliases": [ 00:20:37.647 "c1e3f191-4a46-4bdb-8fce-5f29e216af2a" 00:20:37.647 ], 00:20:37.647 "product_name": "NVMe disk", 00:20:37.647 "block_size": 4096, 00:20:37.647 "num_blocks": 1310720, 00:20:37.647 "uuid": "c1e3f191-4a46-4bdb-8fce-5f29e216af2a", 00:20:37.647 "numa_id": -1, 00:20:37.647 "assigned_rate_limits": { 00:20:37.647 "rw_ios_per_sec": 0, 00:20:37.647 "rw_mbytes_per_sec": 0, 00:20:37.647 "r_mbytes_per_sec": 0, 00:20:37.647 "w_mbytes_per_sec": 0 00:20:37.647 }, 00:20:37.647 "claimed": true, 00:20:37.647 "claim_type": "read_many_write_one", 00:20:37.647 "zoned": false, 00:20:37.647 "supported_io_types": { 00:20:37.647 "read": true, 00:20:37.647 "write": true, 00:20:37.647 "unmap": true, 00:20:37.647 "flush": true, 00:20:37.647 "reset": true, 00:20:37.647 "nvme_admin": true, 00:20:37.647 "nvme_io": true, 00:20:37.647 "nvme_io_md": false, 00:20:37.647 "write_zeroes": true, 00:20:37.647 "zcopy": false, 00:20:37.647 "get_zone_info": false, 00:20:37.647 "zone_management": false, 00:20:37.647 "zone_append": false, 00:20:37.647 "compare": true, 00:20:37.647 "compare_and_write": false, 00:20:37.647 "abort": true, 00:20:37.647 "seek_hole": false, 00:20:37.647 "seek_data": false, 00:20:37.647 "copy": true, 00:20:37.647 "nvme_iov_md": false 00:20:37.647 }, 00:20:37.647 "driver_specific": { 00:20:37.647 "nvme": [ 
00:20:37.647 { 00:20:37.647 "pci_address": "0000:00:11.0", 00:20:37.647 "trid": { 00:20:37.647 "trtype": "PCIe", 00:20:37.647 "traddr": "0000:00:11.0" 00:20:37.647 }, 00:20:37.647 "ctrlr_data": { 00:20:37.647 "cntlid": 0, 00:20:37.647 "vendor_id": "0x1b36", 00:20:37.647 "model_number": "QEMU NVMe Ctrl", 00:20:37.647 "serial_number": "12341", 00:20:37.647 "firmware_revision": "8.0.0", 00:20:37.647 "subnqn": "nqn.2019-08.org.qemu:12341", 00:20:37.647 "oacs": { 00:20:37.647 "security": 0, 00:20:37.647 "format": 1, 00:20:37.647 "firmware": 0, 00:20:37.647 "ns_manage": 1 00:20:37.647 }, 00:20:37.647 "multi_ctrlr": false, 00:20:37.647 "ana_reporting": false 00:20:37.647 }, 00:20:37.647 "vs": { 00:20:37.647 "nvme_version": "1.4" 00:20:37.647 }, 00:20:37.647 "ns_data": { 00:20:37.647 "id": 1, 00:20:37.647 "can_share": false 00:20:37.647 } 00:20:37.647 } 00:20:37.647 ], 00:20:37.647 "mp_policy": "active_passive" 00:20:37.647 } 00:20:37.647 } 00:20:37.647 ]' 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=1310720 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:20:37.647 02:16:02 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 5120 00:20:37.647 02:16:02 ftl.ftl_restore -- ftl/common.sh@63 -- # base_size=5120 00:20:37.647 02:16:02 ftl.ftl_restore -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:20:37.647 02:16:02 ftl.ftl_restore -- ftl/common.sh@67 -- # clear_lvols 00:20:37.647 02:16:02 ftl.ftl_restore -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:20:37.647 02:16:02 ftl.ftl_restore -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:20:37.906 02:16:02 ftl.ftl_restore -- ftl/common.sh@28 -- # stores=de5a7ace-3a16-4412-a20b-304b998b6e1e 00:20:37.906 02:16:02 ftl.ftl_restore -- ftl/common.sh@29 -- # for lvs in $stores 00:20:37.906 02:16:02 ftl.ftl_restore -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u de5a7ace-3a16-4412-a20b-304b998b6e1e 00:20:38.164 02:16:02 ftl.ftl_restore -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:20:38.164 02:16:02 ftl.ftl_restore -- ftl/common.sh@68 -- # lvs=756fbd24-a8fa-4c9e-9f77-40552f7ca0f4 00:20:38.164 02:16:02 ftl.ftl_restore -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 756fbd24-a8fa-4c9e-9f77-40552f7ca0f4 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/restore.sh@43 -- # split_bdev=667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/common.sh@35 -- # local name=nvc0 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/common.sh@37 -- # local base_bdev=667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/common.sh@38 -- # local cache_size= 00:20:38.423 02:16:03 ftl.ftl_restore -- ftl/common.sh@41 -- # get_bdev_size 
667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.423 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.423 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:38.423 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:38.423 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:38.423 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.681 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:38.681 { 00:20:38.681 "name": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:38.681 "aliases": [ 00:20:38.681 "lvs/nvme0n1p0" 00:20:38.681 ], 00:20:38.681 "product_name": "Logical Volume", 00:20:38.681 "block_size": 4096, 00:20:38.681 "num_blocks": 26476544, 00:20:38.681 "uuid": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:38.681 "assigned_rate_limits": { 00:20:38.681 "rw_ios_per_sec": 0, 00:20:38.681 "rw_mbytes_per_sec": 0, 00:20:38.681 "r_mbytes_per_sec": 0, 00:20:38.681 "w_mbytes_per_sec": 0 00:20:38.681 }, 00:20:38.681 "claimed": false, 00:20:38.681 "zoned": false, 00:20:38.681 "supported_io_types": { 00:20:38.681 "read": true, 00:20:38.681 "write": true, 00:20:38.681 "unmap": true, 00:20:38.681 "flush": false, 00:20:38.681 "reset": true, 00:20:38.681 "nvme_admin": false, 00:20:38.681 "nvme_io": false, 00:20:38.681 "nvme_io_md": false, 00:20:38.681 "write_zeroes": true, 00:20:38.681 "zcopy": false, 00:20:38.681 "get_zone_info": false, 00:20:38.681 "zone_management": false, 00:20:38.681 "zone_append": false, 00:20:38.681 "compare": false, 00:20:38.681 "compare_and_write": false, 00:20:38.682 "abort": false, 00:20:38.682 "seek_hole": true, 00:20:38.682 "seek_data": true, 00:20:38.682 "copy": false, 00:20:38.682 "nvme_iov_md": false 00:20:38.682 }, 00:20:38.682 "driver_specific": { 00:20:38.682 "lvol": { 00:20:38.682 "lvol_store_uuid": "756fbd24-a8fa-4c9e-9f77-40552f7ca0f4", 00:20:38.682 "base_bdev": "nvme0n1", 00:20:38.682 "thin_provision": true, 00:20:38.682 "num_allocated_clusters": 0, 00:20:38.682 "snapshot": false, 00:20:38.682 "clone": false, 00:20:38.682 "esnap_clone": false 00:20:38.682 } 00:20:38.682 } 00:20:38.682 } 00:20:38.682 ]' 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:38.682 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:38.682 02:16:03 ftl.ftl_restore -- ftl/common.sh@41 -- # local base_size=5171 00:20:38.682 02:16:03 ftl.ftl_restore -- ftl/common.sh@44 -- # local nvc_bdev 00:20:38.682 02:16:03 ftl.ftl_restore -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:20:38.940 02:16:03 ftl.ftl_restore -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:20:38.940 02:16:03 ftl.ftl_restore -- ftl/common.sh@47 -- # [[ -z '' ]] 00:20:38.940 02:16:03 ftl.ftl_restore -- ftl/common.sh@48 -- # get_bdev_size 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.940 02:16:03 
ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:38.940 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:38.940 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1384 -- # local bs 00:20:38.940 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:38.940 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.199 { 00:20:39.199 "name": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:39.199 "aliases": [ 00:20:39.199 "lvs/nvme0n1p0" 00:20:39.199 ], 00:20:39.199 "product_name": "Logical Volume", 00:20:39.199 "block_size": 4096, 00:20:39.199 "num_blocks": 26476544, 00:20:39.199 "uuid": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:39.199 "assigned_rate_limits": { 00:20:39.199 "rw_ios_per_sec": 0, 00:20:39.199 "rw_mbytes_per_sec": 0, 00:20:39.199 "r_mbytes_per_sec": 0, 00:20:39.199 "w_mbytes_per_sec": 0 00:20:39.199 }, 00:20:39.199 "claimed": false, 00:20:39.199 "zoned": false, 00:20:39.199 "supported_io_types": { 00:20:39.199 "read": true, 00:20:39.199 "write": true, 00:20:39.199 "unmap": true, 00:20:39.199 "flush": false, 00:20:39.199 "reset": true, 00:20:39.199 "nvme_admin": false, 00:20:39.199 "nvme_io": false, 00:20:39.199 "nvme_io_md": false, 00:20:39.199 "write_zeroes": true, 00:20:39.199 "zcopy": false, 00:20:39.199 "get_zone_info": false, 00:20:39.199 "zone_management": false, 00:20:39.199 "zone_append": false, 00:20:39.199 "compare": false, 00:20:39.199 "compare_and_write": false, 00:20:39.199 "abort": false, 00:20:39.199 "seek_hole": true, 00:20:39.199 "seek_data": true, 00:20:39.199 "copy": false, 00:20:39.199 "nvme_iov_md": false 00:20:39.199 }, 00:20:39.199 "driver_specific": { 00:20:39.199 "lvol": { 00:20:39.199 "lvol_store_uuid": "756fbd24-a8fa-4c9e-9f77-40552f7ca0f4", 00:20:39.199 "base_bdev": "nvme0n1", 00:20:39.199 "thin_provision": true, 00:20:39.199 "num_allocated_clusters": 0, 00:20:39.199 "snapshot": false, 00:20:39.199 "clone": false, 00:20:39.199 "esnap_clone": false 00:20:39.199 } 00:20:39.199 } 00:20:39.199 } 00:20:39.199 ]' 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.199 02:16:03 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.199 02:16:03 ftl.ftl_restore -- ftl/common.sh@48 -- # cache_size=5171 00:20:39.199 02:16:03 ftl.ftl_restore -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:20:39.458 02:16:04 ftl.ftl_restore -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:20:39.458 02:16:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # get_bdev_size 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:39.458 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1382 -- # local bdev_name=667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:39.458 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1383 -- # local bdev_info 00:20:39.458 02:16:04 ftl.ftl_restore -- 
common/autotest_common.sh@1384 -- # local bs 00:20:39.458 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1385 -- # local nb 00:20:39.458 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 667f09c0-2475-4601-a14b-7783fbab2c4e 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:20:39.716 { 00:20:39.716 "name": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:39.716 "aliases": [ 00:20:39.716 "lvs/nvme0n1p0" 00:20:39.716 ], 00:20:39.716 "product_name": "Logical Volume", 00:20:39.716 "block_size": 4096, 00:20:39.716 "num_blocks": 26476544, 00:20:39.716 "uuid": "667f09c0-2475-4601-a14b-7783fbab2c4e", 00:20:39.716 "assigned_rate_limits": { 00:20:39.716 "rw_ios_per_sec": 0, 00:20:39.716 "rw_mbytes_per_sec": 0, 00:20:39.716 "r_mbytes_per_sec": 0, 00:20:39.716 "w_mbytes_per_sec": 0 00:20:39.716 }, 00:20:39.716 "claimed": false, 00:20:39.716 "zoned": false, 00:20:39.716 "supported_io_types": { 00:20:39.716 "read": true, 00:20:39.716 "write": true, 00:20:39.716 "unmap": true, 00:20:39.716 "flush": false, 00:20:39.716 "reset": true, 00:20:39.716 "nvme_admin": false, 00:20:39.716 "nvme_io": false, 00:20:39.716 "nvme_io_md": false, 00:20:39.716 "write_zeroes": true, 00:20:39.716 "zcopy": false, 00:20:39.716 "get_zone_info": false, 00:20:39.716 "zone_management": false, 00:20:39.716 "zone_append": false, 00:20:39.716 "compare": false, 00:20:39.716 "compare_and_write": false, 00:20:39.716 "abort": false, 00:20:39.716 "seek_hole": true, 00:20:39.716 "seek_data": true, 00:20:39.716 "copy": false, 00:20:39.716 "nvme_iov_md": false 00:20:39.716 }, 00:20:39.716 "driver_specific": { 00:20:39.716 "lvol": { 00:20:39.716 "lvol_store_uuid": "756fbd24-a8fa-4c9e-9f77-40552f7ca0f4", 00:20:39.716 "base_bdev": "nvme0n1", 00:20:39.716 "thin_provision": true, 00:20:39.716 "num_allocated_clusters": 0, 00:20:39.716 "snapshot": false, 00:20:39.716 "clone": false, 00:20:39.716 "esnap_clone": false 00:20:39.716 } 00:20:39.716 } 00:20:39.716 } 00:20:39.716 ]' 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1387 -- # bs=4096 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1388 -- # nb=26476544 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:20:39.716 02:16:04 ftl.ftl_restore -- common/autotest_common.sh@1392 -- # echo 103424 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 667f09c0-2475-4601-a14b-7783fbab2c4e --l2p_dram_limit 10' 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@54 -- # '[' '' -eq 1 ']' 00:20:39.716 /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh: line 54: [: : integer expression expected 00:20:39.716 02:16:04 ftl.ftl_restore -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 667f09c0-2475-4601-a14b-7783fbab2c4e --l2p_dram_limit 10 -c nvc0n1p0 00:20:39.978 
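The get_bdev_size calls above all follow the same recipe: pull block_size and num_blocks out of the bdev_get_bdevs JSON with jq and convert to MiB. For the 667f09c0-2475-4601-a14b-7783fbab2c4e volume that is 4096 B x 26476544 blocks = 103424 MiB, the value echoed just before bdev_ftl_create is issued with --l2p_dram_limit 10 (which bounds how much of the L2P table stays resident in DRAM, here 10 MiB). The same arithmetic as a standalone sketch, assuming the RPC output was saved to bdev.json (the file name is an assumption; the jq filters are copied from the trace):

    bs=$(jq '.[] .block_size' bdev.json)    # 4096
    nb=$(jq '.[] .num_blocks' bdev.json)    # 26476544
    echo $(( bs * nb / 1024 / 1024 ))       # 103424 (MiB)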
[2024-12-15 02:16:04.553575] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.978 [2024-12-15 02:16:04.553614] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:39.978 [2024-12-15 02:16:04.553627] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:39.978 [2024-12-15 02:16:04.553633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.978 [2024-12-15 02:16:04.553677] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.978 [2024-12-15 02:16:04.553685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:39.978 [2024-12-15 02:16:04.553693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:20:39.978 [2024-12-15 02:16:04.553699] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.978 [2024-12-15 02:16:04.553718] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:39.978 [2024-12-15 02:16:04.554319] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:39.978 [2024-12-15 02:16:04.554335] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.978 [2024-12-15 02:16:04.554341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:39.978 [2024-12-15 02:16:04.554349] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.621 ms 00:20:39.978 [2024-12-15 02:16:04.554355] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.978 [2024-12-15 02:16:04.554379] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID b824f819-dc72-458f-a122-460abb6a208d 00:20:39.978 [2024-12-15 02:16:04.555322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.978 [2024-12-15 02:16:04.555345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:20:39.978 [2024-12-15 02:16:04.555352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:39.978 [2024-12-15 02:16:04.555360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.978 [2024-12-15 02:16:04.560074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.978 [2024-12-15 02:16:04.560106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:39.978 [2024-12-15 02:16:04.560113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.659 ms 00:20:39.979 [2024-12-15 02:16:04.560120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.560186] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.560204] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:39.979 [2024-12-15 02:16:04.560212] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:20:39.979 [2024-12-15 02:16:04.560221] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.560261] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.560271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:39.979 [2024-12-15 02:16:04.560277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:20:39.979 [2024-12-15 02:16:04.560286] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.560303] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:39.979 [2024-12-15 02:16:04.563160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.563184] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:39.979 [2024-12-15 02:16:04.563200] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.859 ms 00:20:39.979 [2024-12-15 02:16:04.563206] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.563235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.563242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:39.979 [2024-12-15 02:16:04.563250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:20:39.979 [2024-12-15 02:16:04.563255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.563276] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:20:39.979 [2024-12-15 02:16:04.563386] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:39.979 [2024-12-15 02:16:04.563398] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:39.979 [2024-12-15 02:16:04.563406] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:39.979 [2024-12-15 02:16:04.563416] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563423] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563430] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:39.979 [2024-12-15 02:16:04.563436] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:39.979 [2024-12-15 02:16:04.563445] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:39.979 [2024-12-15 02:16:04.563451] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:39.979 [2024-12-15 02:16:04.563458] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.563468] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:39.979 [2024-12-15 02:16:04.563476] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.184 ms 00:20:39.979 [2024-12-15 02:16:04.563482] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.563548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.979 [2024-12-15 02:16:04.563555] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:39.979 [2024-12-15 02:16:04.563562] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:20:39.979 [2024-12-15 02:16:04.563567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.979 [2024-12-15 02:16:04.563643] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:39.979 [2024-12-15 02:16:04.563650] ftl_layout.c: 130:dump_region: 
*NOTICE*: [FTL][ftl0] Region sb 00:20:39.979 [2024-12-15 02:16:04.563657] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563663] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563670] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:39.979 [2024-12-15 02:16:04.563675] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563682] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563687] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:39.979 [2024-12-15 02:16:04.563693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.979 [2024-12-15 02:16:04.563705] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:39.979 [2024-12-15 02:16:04.563711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:39.979 [2024-12-15 02:16:04.563717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:39.979 [2024-12-15 02:16:04.563723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:39.979 [2024-12-15 02:16:04.563729] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:39.979 [2024-12-15 02:16:04.563736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563743] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:39.979 [2024-12-15 02:16:04.563748] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563755] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563760] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:39.979 [2024-12-15 02:16:04.563767] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563772] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563778] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:39.979 [2024-12-15 02:16:04.563783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563795] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:39.979 [2024-12-15 02:16:04.563801] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563806] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563812] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:39.979 [2024-12-15 02:16:04.563817] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563823] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563828] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:39.979 [2024-12-15 02:16:04.563835] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563840] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.979 [2024-12-15 02:16:04.563846] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:39.979 [2024-12-15 02:16:04.563851] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:39.979 [2024-12-15 02:16:04.563858] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:39.979 [2024-12-15 02:16:04.563863] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:39.979 [2024-12-15 02:16:04.563869] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:39.979 [2024-12-15 02:16:04.563875] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563881] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:20:39.979 [2024-12-15 02:16:04.563885] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:39.979 [2024-12-15 02:16:04.563892] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563896] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:39.979 [2024-12-15 02:16:04.563903] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:39.979 [2024-12-15 02:16:04.563909] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563916] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:39.979 [2024-12-15 02:16:04.563922] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:39.979 [2024-12-15 02:16:04.563930] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:39.979 [2024-12-15 02:16:04.563935] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:39.979 [2024-12-15 02:16:04.563941] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:39.979 [2024-12-15 02:16:04.563946] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:39.979 [2024-12-15 02:16:04.563952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:39.979 [2024-12-15 02:16:04.563959] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:39.979 [2024-12-15 02:16:04.563967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.979 [2024-12-15 02:16:04.563975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:39.979 [2024-12-15 02:16:04.563982] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:39.979 [2024-12-15 02:16:04.563988] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:39.979 [2024-12-15 02:16:04.563994] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:39.979 [2024-12-15 02:16:04.563999] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:39.979 [2024-12-15 02:16:04.564006] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 
blk_offs:0x6120 blk_sz:0x800 00:20:39.979 [2024-12-15 02:16:04.564011] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:39.979 [2024-12-15 02:16:04.564018] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:39.979 [2024-12-15 02:16:04.564024] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:39.980 [2024-12-15 02:16:04.564032] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564037] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564044] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564049] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564056] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:39.980 [2024-12-15 02:16:04.564061] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:39.980 [2024-12-15 02:16:04.564068] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564074] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:39.980 [2024-12-15 02:16:04.564081] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:39.980 [2024-12-15 02:16:04.564087] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:39.980 [2024-12-15 02:16:04.564093] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:39.980 [2024-12-15 02:16:04.564099] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:39.980 [2024-12-15 02:16:04.564106] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:39.980 [2024-12-15 02:16:04.564112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.509 ms 00:20:39.980 [2024-12-15 02:16:04.564119] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:39.980 [2024-12-15 02:16:04.564160] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 
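A quick cross-check of the layout arithmetic, assuming the 4-byte L2P address size reported above: 20971520 entries at 4 B each is exactly the 80.00 MiB l2p region in the NV cache layout, and at the 4096-byte block size those entries presumably cover 80 GiB of logical space. The --l2p_dram_limit 10 passed to bdev_ftl_create only caps how much of that table stays resident in DRAM (see the "l2p maximum resident size is: 9 (of 10) MiB" notice further down):

  echo $(( 20971520 * 4 / 1024 / 1024 ))    # 80 MiB  -> matches the l2p region size
  echo $(( 20971520 * 4096 / 1024**3 ))     # 80 GiB  -> logical space mapped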
00:20:39.980 [2024-12-15 02:16:04.564172] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:20:44.181 [2024-12-15 02:16:08.366694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.366783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:20:44.181 [2024-12-15 02:16:08.366802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3802.517 ms 00:20:44.181 [2024-12-15 02:16:08.366813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.398551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.398617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.181 [2024-12-15 02:16:08.398631] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.490 ms 00:20:44.181 [2024-12-15 02:16:08.398643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.398794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.398808] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:44.181 [2024-12-15 02:16:08.398818] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.071 ms 00:20:44.181 [2024-12-15 02:16:08.398835] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.434267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.434316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.181 [2024-12-15 02:16:08.434335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.396 ms 00:20:44.181 [2024-12-15 02:16:08.434346] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.434381] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.434395] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.181 [2024-12-15 02:16:08.434404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:44.181 [2024-12-15 02:16:08.434422] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.434995] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.435025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.181 [2024-12-15 02:16:08.435035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.504 ms 00:20:44.181 [2024-12-15 02:16:08.435045] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.435160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.435171] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.181 [2024-12-15 02:16:08.435182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:20:44.181 [2024-12-15 02:16:08.435226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.452538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.453029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.181 [2024-12-15 
02:16:08.453051] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.292 ms 00:20:44.181 [2024-12-15 02:16:08.453062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.482396] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:44.181 [2024-12-15 02:16:08.486167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.486384] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:44.181 [2024-12-15 02:16:08.486412] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 33.009 ms 00:20:44.181 [2024-12-15 02:16:08.486421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.581940] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.581998] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:20:44.181 [2024-12-15 02:16:08.582016] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 95.466 ms 00:20:44.181 [2024-12-15 02:16:08.582026] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.582262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.582280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:44.181 [2024-12-15 02:16:08.582297] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.180 ms 00:20:44.181 [2024-12-15 02:16:08.582305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.608841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.609038] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:20:44.181 [2024-12-15 02:16:08.609069] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.476 ms 00:20:44.181 [2024-12-15 02:16:08.609079] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.634084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.634131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:20:44.181 [2024-12-15 02:16:08.634147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.911 ms 00:20:44.181 [2024-12-15 02:16:08.634155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.634784] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.634804] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:44.181 [2024-12-15 02:16:08.634816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.559 ms 00:20:44.181 [2024-12-15 02:16:08.634827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.717655] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.717706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:20:44.181 [2024-12-15 02:16:08.717725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 82.781 ms 00:20:44.181 [2024-12-15 02:16:08.717733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 
02:16:08.745204] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.745252] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:20:44.181 [2024-12-15 02:16:08.745270] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.362 ms 00:20:44.181 [2024-12-15 02:16:08.745278] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.771060] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.771104] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:20:44.181 [2024-12-15 02:16:08.771120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.717 ms 00:20:44.181 [2024-12-15 02:16:08.771128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.797648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.797699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:44.181 [2024-12-15 02:16:08.797715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.464 ms 00:20:44.181 [2024-12-15 02:16:08.797723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.797781] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.797792] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:44.181 [2024-12-15 02:16:08.797806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:20:44.181 [2024-12-15 02:16:08.797815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.181 [2024-12-15 02:16:08.797914] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.181 [2024-12-15 02:16:08.798117] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:44.181 [2024-12-15 02:16:08.798128] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:20:44.182 [2024-12-15 02:16:08.798136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.182 [2024-12-15 02:16:08.799448] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4245.336 ms, result 0 00:20:44.182 { 00:20:44.182 "name": "ftl0", 00:20:44.182 "uuid": "b824f819-dc72-458f-a122-460abb6a208d" 00:20:44.182 } 00:20:44.182 02:16:08 ftl.ftl_restore -- ftl/restore.sh@61 -- # echo '{"subsystems": [' 00:20:44.182 02:16:08 ftl.ftl_restore -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:20:44.443 02:16:09 ftl.ftl_restore -- ftl/restore.sh@63 -- # echo ']}' 00:20:44.443 02:16:09 ftl.ftl_restore -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:20:44.707 [2024-12-15 02:16:09.230485] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.707 [2024-12-15 02:16:09.230559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:20:44.707 [2024-12-15 02:16:09.230575] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:44.707 [2024-12-15 02:16:09.230587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.707 [2024-12-15 02:16:09.230613] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 
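With 'FTL startup' finished and bdev_ftl_create having returned the {"name": "ftl0", "uuid": ...} object above, the restore.sh@61-65 trace snapshots the bdev configuration and unloads the device. Pieced together, the sequence is roughly the following sketch (the redirect into ftl.json is an assumption, inferred from the --json argument handed to spdk_dd later in the run):

  echo '{"subsystems": ['               >  ftl.json   # opening wrapper
  rpc.py save_subsystem_config -n bdev  >> ftl.json   # current bdev subsystem config
  echo ']}'                             >> ftl.json   # closing wrapper
  rpc.py bdev_ftl_unload -b ftl0                      # drives the 'FTL shutdown' below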
00:20:44.707 [2024-12-15 02:16:09.233747] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.707 [2024-12-15 02:16:09.233932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:20:44.707 [2024-12-15 02:16:09.233960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.110 ms 00:20:44.707 [2024-12-15 02:16:09.233969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.707 [2024-12-15 02:16:09.234300] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.707 [2024-12-15 02:16:09.234316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:20:44.707 [2024-12-15 02:16:09.234328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.285 ms 00:20:44.707 [2024-12-15 02:16:09.234336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.707 [2024-12-15 02:16:09.237586] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.707 [2024-12-15 02:16:09.237610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:20:44.708 [2024-12-15 02:16:09.237623] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.231 ms 00:20:44.708 [2024-12-15 02:16:09.237631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.243758] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.243796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:20:44.708 [2024-12-15 02:16:09.243813] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.102 ms 00:20:44.708 [2024-12-15 02:16:09.243821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.270856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.270903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:20:44.708 [2024-12-15 02:16:09.270919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.938 ms 00:20:44.708 [2024-12-15 02:16:09.270927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.288665] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.288716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:20:44.708 [2024-12-15 02:16:09.288733] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.678 ms 00:20:44.708 [2024-12-15 02:16:09.288741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.288920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.288933] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:20:44.708 [2024-12-15 02:16:09.288945] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:20:44.708 [2024-12-15 02:16:09.288954] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.315275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.315323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:20:44.708 [2024-12-15 02:16:09.315338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.291 ms 00:20:44.708 [2024-12-15 02:16:09.315345] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.340751] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.340799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:20:44.708 [2024-12-15 02:16:09.340815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.345 ms 00:20:44.708 [2024-12-15 02:16:09.340823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.365639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.365689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:20:44.708 [2024-12-15 02:16:09.365704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.757 ms 00:20:44.708 [2024-12-15 02:16:09.365712] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.390262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.708 [2024-12-15 02:16:09.390308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:20:44.708 [2024-12-15 02:16:09.390323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.448 ms 00:20:44.708 [2024-12-15 02:16:09.390331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.708 [2024-12-15 02:16:09.390380] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:20:44.708 [2024-12-15 02:16:09.390396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390412] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390459] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390508] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 
02:16:09.390533] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390559] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390581] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390588] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390678] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390686] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 
00:20:44.708 [2024-12-15 02:16:09.390768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390777] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390796] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390841] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390852] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390871] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390906] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:20:44.708 [2024-12-15 02:16:09.390957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.390967] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.390975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.390987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391000] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 
wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391030] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391038] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391074] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391107] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391169] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391177] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391211] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391248] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391261] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391274] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 89: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391284] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391293] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391302] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391366] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:20:44.709 [2024-12-15 02:16:09.391391] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:20:44.709 [2024-12-15 02:16:09.391402] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b824f819-dc72-458f-a122-460abb6a208d 00:20:44.709 [2024-12-15 02:16:09.391410] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:20:44.709 [2024-12-15 02:16:09.391422] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:20:44.709 [2024-12-15 02:16:09.391433] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:20:44.709 [2024-12-15 02:16:09.391443] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:20:44.709 [2024-12-15 02:16:09.391451] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:20:44.709 [2024-12-15 02:16:09.391460] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:20:44.709 [2024-12-15 02:16:09.391468] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:20:44.709 [2024-12-15 02:16:09.391477] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:20:44.709 [2024-12-15 02:16:09.391484] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:20:44.709 [2024-12-15 02:16:09.391495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.709 [2024-12-15 02:16:09.391503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:20:44.709 [2024-12-15 02:16:09.391513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.117 ms 00:20:44.709 [2024-12-15 02:16:09.391523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.405191] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.709 [2024-12-15 02:16:09.405246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 
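The statistics dump above is internally consistent: all 100 bands sit at 0 / 261120 valid blocks in the free state, user writes are 0, and the 960 total writes are presumably the metadata traffic from initialization, so write amplification works out to 960 / 0, which the dump renders as "WAF: inf".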
00:20:44.709 [2024-12-15 02:16:09.405261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.622 ms 00:20:44.709 [2024-12-15 02:16:09.405269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.405695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:44.709 [2024-12-15 02:16:09.405716] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:20:44.709 [2024-12-15 02:16:09.405730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:20:44.709 [2024-12-15 02:16:09.405738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.452035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.709 [2024-12-15 02:16:09.452085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:44.709 [2024-12-15 02:16:09.452100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.709 [2024-12-15 02:16:09.452108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.452185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.709 [2024-12-15 02:16:09.452212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:44.709 [2024-12-15 02:16:09.452226] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.709 [2024-12-15 02:16:09.452235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.452334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.709 [2024-12-15 02:16:09.452347] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:44.709 [2024-12-15 02:16:09.452357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.709 [2024-12-15 02:16:09.452365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.709 [2024-12-15 02:16:09.452388] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.709 [2024-12-15 02:16:09.452397] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:44.709 [2024-12-15 02:16:09.452407] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.709 [2024-12-15 02:16:09.452417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.537740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.537800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:44.971 [2024-12-15 02:16:09.537816] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.537825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:44.971 [2024-12-15 02:16:09.607495] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.607508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607615] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:44.971 [2024-12-15 02:16:09.607626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.607635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:44.971 [2024-12-15 02:16:09.607730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.607738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607848] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607858] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:44.971 [2024-12-15 02:16:09.607868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.607877] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607923] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:20:44.971 [2024-12-15 02:16:09.607933] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.607940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.607987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.607999] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:44.971 [2024-12-15 02:16:09.608010] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.608018] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.608072] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:20:44.971 [2024-12-15 02:16:09.608094] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:44.971 [2024-12-15 02:16:09.608105] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:20:44.971 [2024-12-15 02:16:09.608113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:44.971 [2024-12-15 02:16:09.608291] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 377.738 ms, result 0 00:20:44.971 true 00:20:44.971 02:16:09 ftl.ftl_restore -- ftl/restore.sh@66 -- # killprocess 79033 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79033 ']' 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79033 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@959 -- # uname 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 79033 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:20:44.971 killing process with pid 79033 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo 
']' 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@972 -- # echo 'killing process with pid 79033' 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@973 -- # kill 79033 00:20:44.971 02:16:09 ftl.ftl_restore -- common/autotest_common.sh@978 -- # wait 79033 00:20:51.564 02:16:15 ftl.ftl_restore -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K 00:20:55.776 262144+0 records in 00:20:55.776 262144+0 records out 00:20:55.776 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.41888 s, 243 MB/s 00:20:55.776 02:16:20 ftl.ftl_restore -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:20:57.690 02:16:22 ftl.ftl_restore -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:20:57.690 [2024-12-15 02:16:22.170630] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:20:57.690 [2024-12-15 02:16:22.171189] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79270 ] 00:20:57.690 [2024-12-15 02:16:22.324672] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:20:57.690 [2024-12-15 02:16:22.428838] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:20:58.265 [2024-12-15 02:16:22.722087] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.265 [2024-12-15 02:16:22.722180] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:20:58.265 [2024-12-15 02:16:22.883041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.883109] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:20:58.265 [2024-12-15 02:16:22.883125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:20:58.265 [2024-12-15 02:16:22.883133] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.883188] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.883219] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:20:58.265 [2024-12-15 02:16:22.883228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:58.265 [2024-12-15 02:16:22.883237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.883259] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:20:58.265 [2024-12-15 02:16:22.884089] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:20:58.265 [2024-12-15 02:16:22.884132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.884141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:20:58.265 [2024-12-15 02:16:22.884151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:20:58.265 [2024-12-15 02:16:22.884159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.885905] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: 
*NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:20:58.265 [2024-12-15 02:16:22.900142] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.900203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:20:58.265 [2024-12-15 02:16:22.900217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.239 ms 00:20:58.265 [2024-12-15 02:16:22.900226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.900310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.900321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:20:58.265 [2024-12-15 02:16:22.900330] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:20:58.265 [2024-12-15 02:16:22.900339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.908703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.908747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:20:58.265 [2024-12-15 02:16:22.908767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.286 ms 00:20:58.265 [2024-12-15 02:16:22.908777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.908856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.908866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:20:58.265 [2024-12-15 02:16:22.908875] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:20:58.265 [2024-12-15 02:16:22.908883] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.908928] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.908939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:20:58.265 [2024-12-15 02:16:22.908947] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:20:58.265 [2024-12-15 02:16:22.908956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.908983] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:20:58.265 [2024-12-15 02:16:22.913088] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.913128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:20:58.265 [2024-12-15 02:16:22.913141] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.111 ms 00:20:58.265 [2024-12-15 02:16:22.913149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.913203] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.913213] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:20:58.265 [2024-12-15 02:16:22.913222] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:20:58.265 [2024-12-15 02:16:22.913230] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.913282] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:20:58.265 [2024-12-15 02:16:22.913319] upgrade/ftl_sb_v5.c: 
278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:20:58.265 [2024-12-15 02:16:22.913357] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:20:58.265 [2024-12-15 02:16:22.913377] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:20:58.265 [2024-12-15 02:16:22.913485] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:20:58.265 [2024-12-15 02:16:22.913497] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:20:58.265 [2024-12-15 02:16:22.913508] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:20:58.265 [2024-12-15 02:16:22.913518] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:20:58.265 [2024-12-15 02:16:22.913527] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:20:58.265 [2024-12-15 02:16:22.913536] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:20:58.265 [2024-12-15 02:16:22.913544] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:20:58.265 [2024-12-15 02:16:22.913552] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:20:58.265 [2024-12-15 02:16:22.913563] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:20:58.265 [2024-12-15 02:16:22.913572] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.913579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:20:58.265 [2024-12-15 02:16:22.913587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.292 ms 00:20:58.265 [2024-12-15 02:16:22.913594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.913682] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.265 [2024-12-15 02:16:22.913692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:20:58.265 [2024-12-15 02:16:22.913700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:20:58.265 [2024-12-15 02:16:22.913708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.265 [2024-12-15 02:16:22.913809] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:20:58.265 [2024-12-15 02:16:22.913820] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:20:58.265 [2024-12-15 02:16:22.913828] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.265 [2024-12-15 02:16:22.913837] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.265 [2024-12-15 02:16:22.913845] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:20:58.265 [2024-12-15 02:16:22.913853] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:20:58.265 [2024-12-15 02:16:22.913861] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:20:58.265 [2024-12-15 02:16:22.913868] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:20:58.265 [2024-12-15 02:16:22.913876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:20:58.265 [2024-12-15 
02:16:22.913883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.265 [2024-12-15 02:16:22.913889] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:20:58.265 [2024-12-15 02:16:22.913898] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:20:58.265 [2024-12-15 02:16:22.913905] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:20:58.266 [2024-12-15 02:16:22.913921] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:20:58.266 [2024-12-15 02:16:22.913928] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:20:58.266 [2024-12-15 02:16:22.913934] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.266 [2024-12-15 02:16:22.913942] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:20:58.266 [2024-12-15 02:16:22.913948] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:20:58.266 [2024-12-15 02:16:22.913955] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.266 [2024-12-15 02:16:22.913963] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:20:58.266 [2024-12-15 02:16:22.913970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:20:58.266 [2024-12-15 02:16:22.913977] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.266 [2024-12-15 02:16:22.913984] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:20:58.266 [2024-12-15 02:16:22.913991] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:20:58.266 [2024-12-15 02:16:22.913997] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.266 [2024-12-15 02:16:22.914004] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:20:58.266 [2024-12-15 02:16:22.914010] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.266 [2024-12-15 02:16:22.914024] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:20:58.266 [2024-12-15 02:16:22.914032] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:20:58.266 [2024-12-15 02:16:22.914045] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:20:58.266 [2024-12-15 02:16:22.914053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914060] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.266 [2024-12-15 02:16:22.914066] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:20:58.266 [2024-12-15 02:16:22.914072] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:20:58.266 [2024-12-15 02:16:22.914078] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:20:58.266 [2024-12-15 02:16:22.914086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:20:58.266 [2024-12-15 02:16:22.914092] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:20:58.266 [2024-12-15 02:16:22.914099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914106] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
trim_log_mirror 00:20:58.266 [2024-12-15 02:16:22.914112] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:20:58.266 [2024-12-15 02:16:22.914119] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914129] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:20:58.266 [2024-12-15 02:16:22.914137] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:20:58.266 [2024-12-15 02:16:22.914145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:20:58.266 [2024-12-15 02:16:22.914153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:20:58.266 [2024-12-15 02:16:22.914161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:20:58.266 [2024-12-15 02:16:22.914168] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:20:58.266 [2024-12-15 02:16:22.914175] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:20:58.266 [2024-12-15 02:16:22.914182] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:20:58.266 [2024-12-15 02:16:22.914189] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:20:58.266 [2024-12-15 02:16:22.914210] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:20:58.266 [2024-12-15 02:16:22.914220] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:20:58.266 [2024-12-15 02:16:22.914230] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914243] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:20:58.266 [2024-12-15 02:16:22.914251] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:20:58.266 [2024-12-15 02:16:22.914260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:20:58.266 [2024-12-15 02:16:22.914267] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:20:58.266 [2024-12-15 02:16:22.914275] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:20:58.266 [2024-12-15 02:16:22.914282] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:20:58.266 [2024-12-15 02:16:22.914289] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:20:58.266 [2024-12-15 02:16:22.914296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:20:58.266 [2024-12-15 02:16:22.914303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:20:58.266 [2024-12-15 02:16:22.914311] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914318] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] 
Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914324] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914332] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914339] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:20:58.266 [2024-12-15 02:16:22.914347] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:20:58.266 [2024-12-15 02:16:22.914355] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914363] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:20:58.266 [2024-12-15 02:16:22.914370] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:20:58.266 [2024-12-15 02:16:22.914377] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:20:58.266 [2024-12-15 02:16:22.914385] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:20:58.266 [2024-12-15 02:16:22.914394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 02:16:22.914402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:20:58.266 [2024-12-15 02:16:22.914410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.654 ms 00:20:58.266 [2024-12-15 02:16:22.914418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.946976] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 02:16:22.947026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:20:58.266 [2024-12-15 02:16:22.947039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.511 ms 00:20:58.266 [2024-12-15 02:16:22.947051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.947143] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 02:16:22.947153] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:20:58.266 [2024-12-15 02:16:22.947162] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:20:58.266 [2024-12-15 02:16:22.947171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.992820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 02:16:22.992874] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:20:58.266 [2024-12-15 02:16:22.992888] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.566 ms 00:20:58.266 [2024-12-15 02:16:22.992897] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.992949] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 
02:16:22.992960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:20:58.266 [2024-12-15 02:16:22.992972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:20:58.266 [2024-12-15 02:16:22.992980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.993651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.266 [2024-12-15 02:16:22.993692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:20:58.266 [2024-12-15 02:16:22.993703] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.593 ms 00:20:58.266 [2024-12-15 02:16:22.993711] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.266 [2024-12-15 02:16:22.993887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.267 [2024-12-15 02:16:22.993898] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:20:58.267 [2024-12-15 02:16:22.993910] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.143 ms 00:20:58.267 [2024-12-15 02:16:22.993918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.267 [2024-12-15 02:16:23.009487] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.267 [2024-12-15 02:16:23.009531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:20:58.267 [2024-12-15 02:16:23.009543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.548 ms 00:20:58.267 [2024-12-15 02:16:23.009551] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.267 [2024-12-15 02:16:23.024115] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:20:58.267 [2024-12-15 02:16:23.024167] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:20:58.267 [2024-12-15 02:16:23.024182] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.267 [2024-12-15 02:16:23.024191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:20:58.267 [2024-12-15 02:16:23.024218] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.516 ms 00:20:58.267 [2024-12-15 02:16:23.024226] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.529 [2024-12-15 02:16:23.050052] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.529 [2024-12-15 02:16:23.050110] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:20:58.529 [2024-12-15 02:16:23.050122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.768 ms 00:20:58.529 [2024-12-15 02:16:23.050130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.529 [2024-12-15 02:16:23.062607] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.529 [2024-12-15 02:16:23.062655] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:20:58.529 [2024-12-15 02:16:23.062666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.423 ms 00:20:58.529 [2024-12-15 02:16:23.062675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.529 [2024-12-15 02:16:23.075284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.529 [2024-12-15 02:16:23.075330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore trim metadata 00:20:58.529 [2024-12-15 02:16:23.075342] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.561 ms 00:20:58.529 [2024-12-15 02:16:23.075349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.529 [2024-12-15 02:16:23.076000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.529 [2024-12-15 02:16:23.076031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:20:58.530 [2024-12-15 02:16:23.076042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.538 ms 00:20:58.530 [2024-12-15 02:16:23.076054] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.140259] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.140330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:20:58.530 [2024-12-15 02:16:23.140346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.184 ms 00:20:58.530 [2024-12-15 02:16:23.140362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.151334] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:20:58.530 [2024-12-15 02:16:23.154273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.154311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:20:58.530 [2024-12-15 02:16:23.154323] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.854 ms 00:20:58.530 [2024-12-15 02:16:23.154332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.154414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.154426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:20:58.530 [2024-12-15 02:16:23.154435] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:20:58.530 [2024-12-15 02:16:23.154443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.154519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.154530] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:20:58.530 [2024-12-15 02:16:23.154539] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:20:58.530 [2024-12-15 02:16:23.154547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.154569] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.154578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:20:58.530 [2024-12-15 02:16:23.154587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:20:58.530 [2024-12-15 02:16:23.154595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.154632] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:20:58.530 [2024-12-15 02:16:23.154646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.154654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:20:58.530 [2024-12-15 02:16:23.154663] mngt/ftl_mngt.c: 430:trace_step: 
*NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:20:58.530 [2024-12-15 02:16:23.154671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.181430] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.181483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:20:58.530 [2024-12-15 02:16:23.181497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.739 ms 00:20:58.530 [2024-12-15 02:16:23.181511] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.181597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:20:58.530 [2024-12-15 02:16:23.181607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:20:58.530 [2024-12-15 02:16:23.181617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:20:58.530 [2024-12-15 02:16:23.181625] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:20:58.530 [2024-12-15 02:16:23.183466] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.907 ms, result 0 00:20:59.476  [2024-12-15T02:16:25.229Z] Copying: 12/1024 [MB] (12 MBps) [2024-12-15T02:16:26.618Z] Copying: 24/1024 [MB] (11 MBps) [2024-12-15T02:16:27.562Z] Copying: 42/1024 [MB] (18 MBps) [2024-12-15T02:16:28.505Z] Copying: 63/1024 [MB] (20 MBps) [2024-12-15T02:16:29.448Z] Copying: 76/1024 [MB] (13 MBps) [2024-12-15T02:16:30.392Z] Copying: 92/1024 [MB] (15 MBps) [2024-12-15T02:16:31.336Z] Copying: 111/1024 [MB] (19 MBps) [2024-12-15T02:16:32.280Z] Copying: 128/1024 [MB] (17 MBps) [2024-12-15T02:16:33.226Z] Copying: 147/1024 [MB] (18 MBps) [2024-12-15T02:16:34.615Z] Copying: 164/1024 [MB] (17 MBps) [2024-12-15T02:16:35.561Z] Copying: 180/1024 [MB] (16 MBps) [2024-12-15T02:16:36.503Z] Copying: 193/1024 [MB] (13 MBps) [2024-12-15T02:16:37.437Z] Copying: 203/1024 [MB] (10 MBps) [2024-12-15T02:16:38.371Z] Copying: 253/1024 [MB] (49 MBps) [2024-12-15T02:16:39.313Z] Copying: 305/1024 [MB] (52 MBps) [2024-12-15T02:16:40.257Z] Copying: 340/1024 [MB] (34 MBps) [2024-12-15T02:16:41.201Z] Copying: 357/1024 [MB] (16 MBps) [2024-12-15T02:16:42.581Z] Copying: 375/1024 [MB] (17 MBps) [2024-12-15T02:16:43.525Z] Copying: 411/1024 [MB] (36 MBps) [2024-12-15T02:16:44.466Z] Copying: 430/1024 [MB] (19 MBps) [2024-12-15T02:16:45.408Z] Copying: 452/1024 [MB] (22 MBps) [2024-12-15T02:16:46.349Z] Copying: 467/1024 [MB] (14 MBps) [2024-12-15T02:16:47.292Z] Copying: 479/1024 [MB] (12 MBps) [2024-12-15T02:16:48.233Z] Copying: 497/1024 [MB] (18 MBps) [2024-12-15T02:16:49.619Z] Copying: 514/1024 [MB] (17 MBps) [2024-12-15T02:16:50.560Z] Copying: 529/1024 [MB] (14 MBps) [2024-12-15T02:16:51.533Z] Copying: 552/1024 [MB] (23 MBps) [2024-12-15T02:16:52.484Z] Copying: 568/1024 [MB] (16 MBps) [2024-12-15T02:16:53.426Z] Copying: 591/1024 [MB] (22 MBps) [2024-12-15T02:16:54.369Z] Copying: 611/1024 [MB] (20 MBps) [2024-12-15T02:16:55.312Z] Copying: 630/1024 [MB] (19 MBps) [2024-12-15T02:16:56.259Z] Copying: 649/1024 [MB] (18 MBps) [2024-12-15T02:16:57.207Z] Copying: 667/1024 [MB] (17 MBps) [2024-12-15T02:16:58.597Z] Copying: 681/1024 [MB] (13 MBps) [2024-12-15T02:16:59.541Z] Copying: 694/1024 [MB] (12 MBps) [2024-12-15T02:17:00.488Z] Copying: 715/1024 [MB] (21 MBps) [2024-12-15T02:17:01.432Z] Copying: 729/1024 [MB] (14 MBps) [2024-12-15T02:17:02.376Z] Copying: 743/1024 [MB] (13 MBps) [2024-12-15T02:17:03.321Z] Copying: 
771456/1048576 [kB] (10224 kBps) [2024-12-15T02:17:04.267Z] Copying: 765/1024 [MB] (11 MBps) [2024-12-15T02:17:05.212Z] Copying: 783/1024 [MB] (18 MBps) [2024-12-15T02:17:06.597Z] Copying: 800/1024 [MB] (16 MBps) [2024-12-15T02:17:07.541Z] Copying: 819/1024 [MB] (19 MBps) [2024-12-15T02:17:08.486Z] Copying: 839/1024 [MB] (19 MBps) [2024-12-15T02:17:09.429Z] Copying: 861/1024 [MB] (22 MBps) [2024-12-15T02:17:10.375Z] Copying: 878/1024 [MB] (17 MBps) [2024-12-15T02:17:11.322Z] Copying: 889/1024 [MB] (10 MBps) [2024-12-15T02:17:12.266Z] Copying: 899/1024 [MB] (10 MBps) [2024-12-15T02:17:13.211Z] Copying: 931096/1048576 [kB] (10048 kBps) [2024-12-15T02:17:14.602Z] Copying: 920/1024 [MB] (11 MBps) [2024-12-15T02:17:15.547Z] Copying: 939/1024 [MB] (18 MBps) [2024-12-15T02:17:16.503Z] Copying: 956/1024 [MB] (17 MBps) [2024-12-15T02:17:17.474Z] Copying: 974/1024 [MB] (17 MBps) [2024-12-15T02:17:18.416Z] Copying: 1000/1024 [MB] (26 MBps) [2024-12-15T02:17:18.416Z] Copying: 1022/1024 [MB] (21 MBps) [2024-12-15T02:17:18.416Z] Copying: 1024/1024 [MB] (average 18 MBps)[2024-12-15 02:17:18.269342] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.651 [2024-12-15 02:17:18.269392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:21:53.651 [2024-12-15 02:17:18.269426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:53.651 [2024-12-15 02:17:18.269435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.651 [2024-12-15 02:17:18.269457] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:21:53.651 [2024-12-15 02:17:18.272298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.651 [2024-12-15 02:17:18.272335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:21:53.651 [2024-12-15 02:17:18.272346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.825 ms 00:21:53.651 [2024-12-15 02:17:18.272361] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.651 [2024-12-15 02:17:18.274228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.651 [2024-12-15 02:17:18.274275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:21:53.651 [2024-12-15 02:17:18.274285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.843 ms 00:21:53.651 [2024-12-15 02:17:18.274293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.651 [2024-12-15 02:17:18.294336] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.294381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:21:53.652 [2024-12-15 02:17:18.294393] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 20.026 ms 00:21:53.652 [2024-12-15 02:17:18.294401] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.300501] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.300542] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:21:53.652 [2024-12-15 02:17:18.300553] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.056 ms 00:21:53.652 [2024-12-15 02:17:18.300561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.326651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:21:53.652 [2024-12-15 02:17:18.326700] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:21:53.652 [2024-12-15 02:17:18.326712] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.039 ms 00:21:53.652 [2024-12-15 02:17:18.326719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.342077] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.342121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:21:53.652 [2024-12-15 02:17:18.342134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.313 ms 00:21:53.652 [2024-12-15 02:17:18.342142] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.342306] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.342324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:21:53.652 [2024-12-15 02:17:18.342334] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.118 ms 00:21:53.652 [2024-12-15 02:17:18.342342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.368277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.368316] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:21:53.652 [2024-12-15 02:17:18.368328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.919 ms 00:21:53.652 [2024-12-15 02:17:18.368336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.652 [2024-12-15 02:17:18.393207] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.652 [2024-12-15 02:17:18.393255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:21:53.652 [2024-12-15 02:17:18.393266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.827 ms 00:21:53.652 [2024-12-15 02:17:18.393273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.914 [2024-12-15 02:17:18.418038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.914 [2024-12-15 02:17:18.418100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:21:53.914 [2024-12-15 02:17:18.418112] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.721 ms 00:21:53.914 [2024-12-15 02:17:18.418120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.914 [2024-12-15 02:17:18.442331] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.914 [2024-12-15 02:17:18.442361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:21:53.915 [2024-12-15 02:17:18.442370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.106 ms 00:21:53.915 [2024-12-15 02:17:18.442378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.915 [2024-12-15 02:17:18.442407] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:21:53.915 [2024-12-15 02:17:18.442421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442443] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442451] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442487] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442494] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442502] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442509] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442516] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442603] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442610] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442617] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442624] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442682] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442696] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442723] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442730] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442738] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442744] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442751] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442766] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442773] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442794] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 
02:17:18.442802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442810] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442818] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442826] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442833] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:21:53.915 [2024-12-15 02:17:18.442962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.442969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.442976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 
00:21:53.916 [2024-12-15 02:17:18.442983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.442990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.442997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443124] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443138] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:21:53.916 [2024-12-15 02:17:18.443160] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:21:53.916 [2024-12-15 02:17:18.443171] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b824f819-dc72-458f-a122-460abb6a208d 00:21:53.916 
[2024-12-15 02:17:18.443179] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:21:53.916 [2024-12-15 02:17:18.443186] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:21:53.916 [2024-12-15 02:17:18.443193] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:21:53.916 [2024-12-15 02:17:18.443210] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:21:53.916 [2024-12-15 02:17:18.443217] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:21:53.916 [2024-12-15 02:17:18.443230] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:21:53.916 [2024-12-15 02:17:18.443238] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:21:53.916 [2024-12-15 02:17:18.443244] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:21:53.916 [2024-12-15 02:17:18.443251] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:21:53.916 [2024-12-15 02:17:18.443258] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.916 [2024-12-15 02:17:18.443265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:21:53.916 [2024-12-15 02:17:18.443273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.852 ms 00:21:53.916 [2024-12-15 02:17:18.443280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.455593] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.916 [2024-12-15 02:17:18.455622] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:21:53.916 [2024-12-15 02:17:18.455632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.295 ms 00:21:53.916 [2024-12-15 02:17:18.455640] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.455987] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:53.916 [2024-12-15 02:17:18.456089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:21:53.916 [2024-12-15 02:17:18.456097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.321 ms 00:21:53.916 [2024-12-15 02:17:18.456110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.488872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.488905] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:53.916 [2024-12-15 02:17:18.488914] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.488922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.488972] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.488980] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:53.916 [2024-12-15 02:17:18.488988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.488998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.489047] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.489056] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:53.916 [2024-12-15 02:17:18.489064] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.489071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.489085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.489093] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:53.916 [2024-12-15 02:17:18.489101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.489108] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.565991] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.566033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:53.916 [2024-12-15 02:17:18.566044] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.566053] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.629913] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.629960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:53.916 [2024-12-15 02:17:18.629972] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.629986] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.630059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.630068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:53.916 [2024-12-15 02:17:18.630076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.630084] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.630118] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.630128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:53.916 [2024-12-15 02:17:18.630135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.630143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.630252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.630264] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:53.916 [2024-12-15 02:17:18.630272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.630280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.630310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.630318] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:21:53.916 [2024-12-15 02:17:18.630326] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.630334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.916 [2024-12-15 02:17:18.630369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.916 [2024-12-15 02:17:18.630382] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open 
cache bdev 00:21:53.916 [2024-12-15 02:17:18.630389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.916 [2024-12-15 02:17:18.630396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.917 [2024-12-15 02:17:18.630438] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:21:53.917 [2024-12-15 02:17:18.630448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:53.917 [2024-12-15 02:17:18.630457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:21:53.917 [2024-12-15 02:17:18.630464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:53.917 [2024-12-15 02:17:18.630584] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 361.217 ms, result 0 00:21:54.858 00:21:54.858 00:21:54.858 02:17:19 ftl.ftl_restore -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:21:54.858 [2024-12-15 02:17:19.538418] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:21:54.858 [2024-12-15 02:17:19.538582] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid79863 ] 00:21:55.119 [2024-12-15 02:17:19.703984] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:21:55.119 [2024-12-15 02:17:19.817756] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:21:55.380 [2024-12-15 02:17:20.115863] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.380 [2024-12-15 02:17:20.115949] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:21:55.645 [2024-12-15 02:17:20.277280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.277335] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:21:55.645 [2024-12-15 02:17:20.277351] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.645 [2024-12-15 02:17:20.277360] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.277442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.277456] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:21:55.645 [2024-12-15 02:17:20.277466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:21:55.645 [2024-12-15 02:17:20.277476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.277498] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:21:55.645 [2024-12-15 02:17:20.278289] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:21:55.645 [2024-12-15 02:17:20.278321] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.278330] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:21:55.645 [2024-12-15 02:17:20.278341] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 
0.829 ms 00:21:55.645 [2024-12-15 02:17:20.278349] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.280038] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:21:55.645 [2024-12-15 02:17:20.293853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.293891] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:21:55.645 [2024-12-15 02:17:20.293903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.818 ms 00:21:55.645 [2024-12-15 02:17:20.293912] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.293993] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.294004] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:21:55.645 [2024-12-15 02:17:20.294013] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:21:55.645 [2024-12-15 02:17:20.294021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.302221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.302254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:21:55.645 [2024-12-15 02:17:20.302265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.124 ms 00:21:55.645 [2024-12-15 02:17:20.302279] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.302359] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.302368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:21:55.645 [2024-12-15 02:17:20.302377] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:21:55.645 [2024-12-15 02:17:20.302385] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.302428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.302438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:21:55.645 [2024-12-15 02:17:20.302447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:21:55.645 [2024-12-15 02:17:20.302454] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.302481] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:21:55.645 [2024-12-15 02:17:20.306472] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.306506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:21:55.645 [2024-12-15 02:17:20.306519] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.998 ms 00:21:55.645 [2024-12-15 02:17:20.306527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.645 [2024-12-15 02:17:20.306565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.645 [2024-12-15 02:17:20.306575] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:21:55.645 [2024-12-15 02:17:20.306587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:21:55.645 [2024-12-15 02:17:20.306595] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:21:55.645 [2024-12-15 02:17:20.306645] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:21:55.645 [2024-12-15 02:17:20.306670] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:21:55.645 [2024-12-15 02:17:20.306707] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:21:55.645 [2024-12-15 02:17:20.306726] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:21:55.645 [2024-12-15 02:17:20.306833] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:21:55.645 [2024-12-15 02:17:20.306844] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:21:55.645 [2024-12-15 02:17:20.306854] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:21:55.646 [2024-12-15 02:17:20.306865] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:21:55.646 [2024-12-15 02:17:20.306874] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:21:55.646 [2024-12-15 02:17:20.306882] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:21:55.646 [2024-12-15 02:17:20.306891] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:21:55.646 [2024-12-15 02:17:20.306901] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:21:55.646 [2024-12-15 02:17:20.306912] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:21:55.646 [2024-12-15 02:17:20.306920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.646 [2024-12-15 02:17:20.306928] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:21:55.646 [2024-12-15 02:17:20.306936] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.278 ms 00:21:55.646 [2024-12-15 02:17:20.306944] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.646 [2024-12-15 02:17:20.307028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.646 [2024-12-15 02:17:20.307046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:21:55.646 [2024-12-15 02:17:20.307054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:21:55.646 [2024-12-15 02:17:20.307062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.646 [2024-12-15 02:17:20.307165] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:21:55.646 [2024-12-15 02:17:20.307176] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:21:55.646 [2024-12-15 02:17:20.307186] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307208] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307218] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:21:55.646 [2024-12-15 02:17:20.307226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307235] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307242] ftl_layout.c: 
130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:21:55.646 [2024-12-15 02:17:20.307250] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307258] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.646 [2024-12-15 02:17:20.307265] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:21:55.646 [2024-12-15 02:17:20.307272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:21:55.646 [2024-12-15 02:17:20.307279] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:21:55.646 [2024-12-15 02:17:20.307294] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:21:55.646 [2024-12-15 02:17:20.307302] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:21:55.646 [2024-12-15 02:17:20.307309] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307316] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:21:55.646 [2024-12-15 02:17:20.307323] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307329] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:21:55.646 [2024-12-15 02:17:20.307344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:21:55.646 [2024-12-15 02:17:20.307366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307380] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:21:55.646 [2024-12-15 02:17:20.307386] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307393] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307400] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:21:55.646 [2024-12-15 02:17:20.307407] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307414] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:21:55.646 [2024-12-15 02:17:20.307428] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307435] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.646 [2024-12-15 02:17:20.307442] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:21:55.646 [2024-12-15 02:17:20.307449] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:21:55.646 [2024-12-15 02:17:20.307456] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:21:55.646 [2024-12-15 02:17:20.307464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:21:55.646 [2024-12-15 02:17:20.307472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:21:55.646 
[2024-12-15 02:17:20.307479] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307487] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:21:55.646 [2024-12-15 02:17:20.307494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:21:55.646 [2024-12-15 02:17:20.307501] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307508] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:21:55.646 [2024-12-15 02:17:20.307516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:21:55.646 [2024-12-15 02:17:20.307524] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307532] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:21:55.646 [2024-12-15 02:17:20.307539] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:21:55.646 [2024-12-15 02:17:20.307546] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:21:55.646 [2024-12-15 02:17:20.307553] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:21:55.646 [2024-12-15 02:17:20.307560] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:21:55.646 [2024-12-15 02:17:20.307566] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:21:55.646 [2024-12-15 02:17:20.307572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:21:55.646 [2024-12-15 02:17:20.307581] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:21:55.646 [2024-12-15 02:17:20.307591] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307605] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:21:55.646 [2024-12-15 02:17:20.307612] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:21:55.646 [2024-12-15 02:17:20.307619] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:21:55.646 [2024-12-15 02:17:20.307627] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:21:55.646 [2024-12-15 02:17:20.307635] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:21:55.646 [2024-12-15 02:17:20.307643] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:21:55.646 [2024-12-15 02:17:20.307650] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:21:55.646 [2024-12-15 02:17:20.307656] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:21:55.646 [2024-12-15 02:17:20.307663] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:21:55.646 [2024-12-15 02:17:20.307670] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307677] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307683] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307691] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307699] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:21:55.646 [2024-12-15 02:17:20.307706] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:21:55.646 [2024-12-15 02:17:20.307716] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:21:55.646 [2024-12-15 02:17:20.307724] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:21:55.647 [2024-12-15 02:17:20.307732] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:21:55.647 [2024-12-15 02:17:20.307739] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:21:55.647 [2024-12-15 02:17:20.307747] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:21:55.647 [2024-12-15 02:17:20.307754] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.307762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:21:55.647 [2024-12-15 02:17:20.307770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.659 ms 00:21:55.647 [2024-12-15 02:17:20.307777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.339863] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.339903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:21:55.647 [2024-12-15 02:17:20.339915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.043 ms 00:21:55.647 [2024-12-15 02:17:20.339927] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.340013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.340022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:21:55.647 [2024-12-15 02:17:20.340031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:21:55.647 [2024-12-15 02:17:20.340040] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.388192] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.388251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:21:55.647 [2024-12-15 02:17:20.388265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.090 ms 00:21:55.647 [2024-12-15 
02:17:20.388274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.388323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.388334] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:21:55.647 [2024-12-15 02:17:20.388348] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:21:55.647 [2024-12-15 02:17:20.388356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.388966] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.389000] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:21:55.647 [2024-12-15 02:17:20.389011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:21:55.647 [2024-12-15 02:17:20.389021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.389177] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.389187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:21:55.647 [2024-12-15 02:17:20.389219] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 ms 00:21:55.647 [2024-12-15 02:17:20.389228] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.647 [2024-12-15 02:17:20.405304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.647 [2024-12-15 02:17:20.405344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:21:55.647 [2024-12-15 02:17:20.405355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.054 ms 00:21:55.647 [2024-12-15 02:17:20.405363] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.420058] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:21:55.910 [2024-12-15 02:17:20.420097] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:21:55.910 [2024-12-15 02:17:20.420111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.420121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:21:55.910 [2024-12-15 02:17:20.420130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.625 ms 00:21:55.910 [2024-12-15 02:17:20.420138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.445693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.445734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:21:55.910 [2024-12-15 02:17:20.445747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.502 ms 00:21:55.910 [2024-12-15 02:17:20.445755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.458533] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.458572] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:21:55.910 [2024-12-15 02:17:20.458584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.713 ms 00:21:55.910 [2024-12-15 02:17:20.458591] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.471390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.471430] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:21:55.910 [2024-12-15 02:17:20.471442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.753 ms 00:21:55.910 [2024-12-15 02:17:20.471450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.472232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.472265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:21:55.910 [2024-12-15 02:17:20.472279] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:21:55.910 [2024-12-15 02:17:20.472288] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.535642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.535695] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:21:55.910 [2024-12-15 02:17:20.535716] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 63.334 ms 00:21:55.910 [2024-12-15 02:17:20.535727] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.546891] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:21:55.910 [2024-12-15 02:17:20.549869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.549906] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:21:55.910 [2024-12-15 02:17:20.549919] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.084 ms 00:21:55.910 [2024-12-15 02:17:20.549929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.550012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.550023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:21:55.910 [2024-12-15 02:17:20.550033] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:21:55.910 [2024-12-15 02:17:20.550044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.910 [2024-12-15 02:17:20.550116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.910 [2024-12-15 02:17:20.550127] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:21:55.911 [2024-12-15 02:17:20.550137] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:21:55.911 [2024-12-15 02:17:20.550146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.911 [2024-12-15 02:17:20.550167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.911 [2024-12-15 02:17:20.550176] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:21:55.911 [2024-12-15 02:17:20.550186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:21:55.911 [2024-12-15 02:17:20.550214] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.911 [2024-12-15 02:17:20.550255] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:21:55.911 [2024-12-15 02:17:20.550267] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] 
Action 00:21:55.911 [2024-12-15 02:17:20.550275] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:21:55.911 [2024-12-15 02:17:20.550285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:21:55.911 [2024-12-15 02:17:20.550293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.911 [2024-12-15 02:17:20.576260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.911 [2024-12-15 02:17:20.576298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:21:55.911 [2024-12-15 02:17:20.576315] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.947 ms 00:21:55.911 [2024-12-15 02:17:20.576324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.911 [2024-12-15 02:17:20.576407] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:21:55.911 [2024-12-15 02:17:20.576417] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:21:55.911 [2024-12-15 02:17:20.576426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:21:55.911 [2024-12-15 02:17:20.576434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:21:55.911 [2024-12-15 02:17:20.577672] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 299.854 ms, result 0 00:21:57.295  [2024-12-15T02:17:23.003Z] Copying: 13/1024 [MB] (13 MBps) [2024-12-15T02:17:23.944Z] Copying: 25/1024 [MB] (11 MBps) [2024-12-15T02:17:24.888Z] Copying: 47/1024 [MB] (22 MBps) [2024-12-15T02:17:25.829Z] Copying: 68/1024 [MB] (20 MBps) [2024-12-15T02:17:26.773Z] Copying: 94/1024 [MB] (25 MBps) [2024-12-15T02:17:28.160Z] Copying: 109/1024 [MB] (15 MBps) [2024-12-15T02:17:29.104Z] Copying: 124/1024 [MB] (14 MBps) [2024-12-15T02:17:30.046Z] Copying: 135/1024 [MB] (10 MBps) [2024-12-15T02:17:30.990Z] Copying: 146/1024 [MB] (11 MBps) [2024-12-15T02:17:31.933Z] Copying: 161/1024 [MB] (15 MBps) [2024-12-15T02:17:32.878Z] Copying: 172/1024 [MB] (10 MBps) [2024-12-15T02:17:33.822Z] Copying: 184/1024 [MB] (11 MBps) [2024-12-15T02:17:35.204Z] Copying: 199/1024 [MB] (15 MBps) [2024-12-15T02:17:35.776Z] Copying: 217/1024 [MB] (17 MBps) [2024-12-15T02:17:37.164Z] Copying: 229/1024 [MB] (11 MBps) [2024-12-15T02:17:38.106Z] Copying: 248/1024 [MB] (18 MBps) [2024-12-15T02:17:39.047Z] Copying: 258/1024 [MB] (10 MBps) [2024-12-15T02:17:39.988Z] Copying: 269/1024 [MB] (10 MBps) [2024-12-15T02:17:40.932Z] Copying: 292/1024 [MB] (22 MBps) [2024-12-15T02:17:41.891Z] Copying: 312/1024 [MB] (20 MBps) [2024-12-15T02:17:42.868Z] Copying: 324/1024 [MB] (12 MBps) [2024-12-15T02:17:43.812Z] Copying: 341/1024 [MB] (16 MBps) [2024-12-15T02:17:44.773Z] Copying: 363/1024 [MB] (21 MBps) [2024-12-15T02:17:46.156Z] Copying: 384/1024 [MB] (20 MBps) [2024-12-15T02:17:47.101Z] Copying: 413/1024 [MB] (29 MBps) [2024-12-15T02:17:48.046Z] Copying: 433/1024 [MB] (19 MBps) [2024-12-15T02:17:48.988Z] Copying: 455/1024 [MB] (21 MBps) [2024-12-15T02:17:49.934Z] Copying: 477/1024 [MB] (22 MBps) [2024-12-15T02:17:50.877Z] Copying: 497/1024 [MB] (19 MBps) [2024-12-15T02:17:51.823Z] Copying: 517/1024 [MB] (19 MBps) [2024-12-15T02:17:52.767Z] Copying: 536/1024 [MB] (18 MBps) [2024-12-15T02:17:54.154Z] Copying: 546/1024 [MB] (10 MBps) [2024-12-15T02:17:55.098Z] Copying: 561/1024 [MB] (15 MBps) [2024-12-15T02:17:56.043Z] Copying: 577/1024 [MB] (15 MBps) [2024-12-15T02:17:56.985Z] Copying: 588/1024 [MB] (10 MBps) 
[2024-12-15T02:17:57.927Z] Copying: 598/1024 [MB] (10 MBps) [2024-12-15T02:17:58.869Z] Copying: 608/1024 [MB] (10 MBps) [2024-12-15T02:17:59.812Z] Copying: 619/1024 [MB] (10 MBps) [2024-12-15T02:18:01.197Z] Copying: 629/1024 [MB] (10 MBps) [2024-12-15T02:18:01.766Z] Copying: 654752/1048576 [kB] (10196 kBps) [2024-12-15T02:18:03.151Z] Copying: 649/1024 [MB] (10 MBps) [2024-12-15T02:18:04.092Z] Copying: 660/1024 [MB] (10 MBps) [2024-12-15T02:18:05.032Z] Copying: 670/1024 [MB] (10 MBps) [2024-12-15T02:18:05.975Z] Copying: 681/1024 [MB] (10 MBps) [2024-12-15T02:18:06.918Z] Copying: 691/1024 [MB] (10 MBps) [2024-12-15T02:18:07.861Z] Copying: 702/1024 [MB] (10 MBps) [2024-12-15T02:18:08.863Z] Copying: 712/1024 [MB] (10 MBps) [2024-12-15T02:18:09.804Z] Copying: 723/1024 [MB] (10 MBps) [2024-12-15T02:18:11.187Z] Copying: 733/1024 [MB] (10 MBps) [2024-12-15T02:18:12.130Z] Copying: 743/1024 [MB] (10 MBps) [2024-12-15T02:18:13.075Z] Copying: 754/1024 [MB] (10 MBps) [2024-12-15T02:18:14.021Z] Copying: 764/1024 [MB] (10 MBps) [2024-12-15T02:18:14.966Z] Copying: 775/1024 [MB] (10 MBps) [2024-12-15T02:18:15.945Z] Copying: 785/1024 [MB] (10 MBps) [2024-12-15T02:18:16.892Z] Copying: 796/1024 [MB] (10 MBps) [2024-12-15T02:18:17.834Z] Copying: 807/1024 [MB] (10 MBps) [2024-12-15T02:18:18.779Z] Copying: 822/1024 [MB] (15 MBps) [2024-12-15T02:18:20.166Z] Copying: 838/1024 [MB] (16 MBps) [2024-12-15T02:18:21.112Z] Copying: 851/1024 [MB] (12 MBps) [2024-12-15T02:18:22.056Z] Copying: 871/1024 [MB] (20 MBps) [2024-12-15T02:18:23.001Z] Copying: 884/1024 [MB] (13 MBps) [2024-12-15T02:18:23.944Z] Copying: 895/1024 [MB] (10 MBps) [2024-12-15T02:18:24.887Z] Copying: 907/1024 [MB] (11 MBps) [2024-12-15T02:18:25.829Z] Copying: 923/1024 [MB] (16 MBps) [2024-12-15T02:18:26.770Z] Copying: 941/1024 [MB] (17 MBps) [2024-12-15T02:18:28.158Z] Copying: 962/1024 [MB] (21 MBps) [2024-12-15T02:18:29.103Z] Copying: 980/1024 [MB] (17 MBps) [2024-12-15T02:18:30.048Z] Copying: 999/1024 [MB] (19 MBps) [2024-12-15T02:18:30.309Z] Copying: 1016/1024 [MB] (17 MBps) [2024-12-15T02:18:30.309Z] Copying: 1024/1024 [MB] (average 14 MBps)[2024-12-15 02:18:30.203581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.203674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:23:05.544 [2024-12-15 02:18:30.203692] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:05.544 [2024-12-15 02:18:30.203701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.203728] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:23:05.544 [2024-12-15 02:18:30.206901] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.206954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:23:05.544 [2024-12-15 02:18:30.206966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.155 ms 00:23:05.544 [2024-12-15 02:18:30.206975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.207237] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.207249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:23:05.544 [2024-12-15 02:18:30.207259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms 00:23:05.544 [2024-12-15 02:18:30.207267] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.210724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.210747] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:23:05.544 [2024-12-15 02:18:30.210758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.442 ms 00:23:05.544 [2024-12-15 02:18:30.210771] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.217242] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.217290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:23:05.544 [2024-12-15 02:18:30.217304] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.453 ms 00:23:05.544 [2024-12-15 02:18:30.217314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.246183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.246251] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:23:05.544 [2024-12-15 02:18:30.246265] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.783 ms 00:23:05.544 [2024-12-15 02:18:30.246273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.262004] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.262057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:23:05.544 [2024-12-15 02:18:30.262071] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.677 ms 00:23:05.544 [2024-12-15 02:18:30.262080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.262263] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.262277] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:23:05.544 [2024-12-15 02:18:30.262288] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:23:05.544 [2024-12-15 02:18:30.262297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.544 [2024-12-15 02:18:30.289521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.544 [2024-12-15 02:18:30.289579] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:23:05.544 [2024-12-15 02:18:30.289592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.208 ms 00:23:05.544 [2024-12-15 02:18:30.289600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.807 [2024-12-15 02:18:30.315811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.807 [2024-12-15 02:18:30.315859] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:23:05.807 [2024-12-15 02:18:30.315871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.160 ms 00:23:05.807 [2024-12-15 02:18:30.315879] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.807 [2024-12-15 02:18:30.341611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.807 [2024-12-15 02:18:30.341668] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:23:05.807 [2024-12-15 02:18:30.341682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 25.680 ms 00:23:05.807 [2024-12-15 02:18:30.341690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.807 [2024-12-15 02:18:30.367270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.807 [2024-12-15 02:18:30.367320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:23:05.807 [2024-12-15 02:18:30.367332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.483 ms 00:23:05.807 [2024-12-15 02:18:30.367340] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.807 [2024-12-15 02:18:30.367389] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:23:05.807 [2024-12-15 02:18:30.367415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367428] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367445] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367454] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367463] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367479] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367521] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367550] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367565] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:23:05.807 [2024-12-15 02:18:30.367572] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 
00:23:05.807 [2024-12-15 02:18:30.367579] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367594] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367601] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367616] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367631] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367688] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 
wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367772] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367787] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367802] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367809] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367824] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367832] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367847] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367854] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367884] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367895] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367903] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367917] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367932] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367946] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367953] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 70: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367961] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367976] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367983] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.367999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368013] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368021] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368036] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368051] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368059] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368066] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368103] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:23:05.808 [2024-12-15 02:18:30.368143] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:23:05.809 [2024-12-15 02:18:30.368214] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:23:05.809 [2024-12-15 02:18:30.368222] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b824f819-dc72-458f-a122-460abb6a208d 00:23:05.809 [2024-12-15 02:18:30.368232] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:23:05.809 [2024-12-15 02:18:30.368240] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:23:05.809 [2024-12-15 02:18:30.368248] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:23:05.809 [2024-12-15 02:18:30.368256] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:23:05.809 [2024-12-15 02:18:30.368271] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:23:05.809 [2024-12-15 02:18:30.368280] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:23:05.809 [2024-12-15 02:18:30.368287] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:23:05.809 [2024-12-15 02:18:30.368294] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:23:05.809 [2024-12-15 02:18:30.368300] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:23:05.809 [2024-12-15 02:18:30.368307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.809 [2024-12-15 02:18:30.368315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:23:05.809 [2024-12-15 02:18:30.368335] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.919 ms 00:23:05.809 [2024-12-15 02:18:30.368345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.382094] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.809 [2024-12-15 02:18:30.382142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:23:05.809 [2024-12-15 02:18:30.382153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.713 ms 00:23:05.809 [2024-12-15 02:18:30.382161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.382577] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:05.809 [2024-12-15 02:18:30.382608] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:23:05.809 [2024-12-15 02:18:30.382626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.393 ms 00:23:05.809 [2024-12-15 02:18:30.382634] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.419538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.809 [2024-12-15 02:18:30.419593] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:05.809 [2024-12-15 02:18:30.419606] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.809 [2024-12-15 02:18:30.419616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.419688] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.809 [2024-12-15 02:18:30.419699] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:05.809 [2024-12-15 02:18:30.419714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.809 [2024-12-15 02:18:30.419723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.419810] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.809 [2024-12-15 02:18:30.419822] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:05.809 [2024-12-15 02:18:30.419832] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.809 [2024-12-15 02:18:30.419840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.419857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.809 [2024-12-15 02:18:30.419867] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:05.809 [2024-12-15 02:18:30.419876] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.809 [2024-12-15 02:18:30.419888] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:05.809 [2024-12-15 02:18:30.506108] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:05.809 [2024-12-15 02:18:30.506169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:05.809 [2024-12-15 02:18:30.506182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:05.809 [2024-12-15 02:18:30.506191] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.575592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.071 [2024-12-15 02:18:30.575650] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:06.071 [2024-12-15 02:18:30.575669] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.071 [2024-12-15 02:18:30.575678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.575738] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.071 [2024-12-15 02:18:30.575749] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:06.071 [2024-12-15 02:18:30.575758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.071 [2024-12-15 02:18:30.575766] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.575824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.071 [2024-12-15 02:18:30.575835] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:06.071 [2024-12-15 02:18:30.575843] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.071 [2024-12-15 02:18:30.575852] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.575957] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl0] Rollback 00:23:06.071 [2024-12-15 02:18:30.575968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:06.071 [2024-12-15 02:18:30.575977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.071 [2024-12-15 02:18:30.575985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.576017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.071 [2024-12-15 02:18:30.576027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:23:06.071 [2024-12-15 02:18:30.576035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.071 [2024-12-15 02:18:30.576043] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.071 [2024-12-15 02:18:30.576089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.072 [2024-12-15 02:18:30.576099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:06.072 [2024-12-15 02:18:30.576108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.072 [2024-12-15 02:18:30.576116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.072 [2024-12-15 02:18:30.576163] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:23:06.072 [2024-12-15 02:18:30.576173] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:06.072 [2024-12-15 02:18:30.576181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:23:06.072 [2024-12-15 02:18:30.576190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:06.072 [2024-12-15 02:18:30.576357] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.747 ms, result 0 00:23:06.645 00:23:06.645 00:23:06.645 02:18:31 ftl.ftl_restore -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:23:09.193 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:23:09.193 02:18:33 ftl.ftl_restore -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:23:09.193 [2024-12-15 02:18:33.619008] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:23:09.193 [2024-12-15 02:18:33.619142] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid80621 ] 00:23:09.193 [2024-12-15 02:18:33.780950] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:23:09.193 [2024-12-15 02:18:33.883068] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:23:09.454 [2024-12-15 02:18:34.177403] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:09.454 [2024-12-15 02:18:34.177499] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:23:09.736 [2024-12-15 02:18:34.339039] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.339113] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:23:09.736 [2024-12-15 02:18:34.339129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:09.736 [2024-12-15 02:18:34.339138] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.339212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.339227] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:23:09.736 [2024-12-15 02:18:34.339236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.055 ms 00:23:09.736 [2024-12-15 02:18:34.339244] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.339265] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:23:09.736 [2024-12-15 02:18:34.340295] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:23:09.736 [2024-12-15 02:18:34.340356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.340367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:23:09.736 [2024-12-15 02:18:34.340378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.096 ms 00:23:09.736 [2024-12-15 02:18:34.340386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.342936] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:23:09.736 [2024-12-15 02:18:34.357559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.357617] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:23:09.736 [2024-12-15 02:18:34.357633] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.626 ms 00:23:09.736 [2024-12-15 02:18:34.357641] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.357735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.357746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:23:09.736 [2024-12-15 02:18:34.357756] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.032 ms 00:23:09.736 [2024-12-15 02:18:34.357763] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.366997] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:23:09.736 [2024-12-15 02:18:34.367044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:23:09.736 [2024-12-15 02:18:34.367056] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.152 ms 00:23:09.736 [2024-12-15 02:18:34.367071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.367153] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.367163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:23:09.736 [2024-12-15 02:18:34.367172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:23:09.736 [2024-12-15 02:18:34.367180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.367246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.367257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:23:09.736 [2024-12-15 02:18:34.367266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:23:09.736 [2024-12-15 02:18:34.367275] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.367305] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:23:09.736 [2024-12-15 02:18:34.371489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.371532] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:23:09.736 [2024-12-15 02:18:34.371546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.192 ms 00:23:09.736 [2024-12-15 02:18:34.371555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.371594] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.736 [2024-12-15 02:18:34.371604] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:23:09.736 [2024-12-15 02:18:34.371614] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:23:09.736 [2024-12-15 02:18:34.371622] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.736 [2024-12-15 02:18:34.371676] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:23:09.736 [2024-12-15 02:18:34.371700] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:23:09.736 [2024-12-15 02:18:34.371739] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:23:09.736 [2024-12-15 02:18:34.371758] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:23:09.736 [2024-12-15 02:18:34.371866] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:23:09.736 [2024-12-15 02:18:34.371878] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:23:09.736 [2024-12-15 02:18:34.371890] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:23:09.736 [2024-12-15 02:18:34.371900] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:23:09.736 [2024-12-15 02:18:34.371910] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:23:09.736 [2024-12-15 02:18:34.371918] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:23:09.736 [2024-12-15 02:18:34.371927] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:23:09.736 [2024-12-15 02:18:34.371936] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:23:09.736 [2024-12-15 02:18:34.371947] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:23:09.736 [2024-12-15 02:18:34.371955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.737 [2024-12-15 02:18:34.371963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:23:09.737 [2024-12-15 02:18:34.371971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.282 ms 00:23:09.737 [2024-12-15 02:18:34.371978] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.737 [2024-12-15 02:18:34.372061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.737 [2024-12-15 02:18:34.372078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:23:09.737 [2024-12-15 02:18:34.372086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:23:09.737 [2024-12-15 02:18:34.372093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.737 [2024-12-15 02:18:34.372222] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:23:09.737 [2024-12-15 02:18:34.372235] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:23:09.737 [2024-12-15 02:18:34.372243] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372260] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:23:09.737 [2024-12-15 02:18:34.372267] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372281] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:23:09.737 [2024-12-15 02:18:34.372293] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372301] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.737 [2024-12-15 02:18:34.372308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:23:09.737 [2024-12-15 02:18:34.372314] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:23:09.737 [2024-12-15 02:18:34.372321] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:23:09.737 [2024-12-15 02:18:34.372337] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:23:09.737 [2024-12-15 02:18:34.372344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:23:09.737 [2024-12-15 02:18:34.372351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372358] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:23:09.737 [2024-12-15 02:18:34.372366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372373] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372381] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:23:09.737 [2024-12-15 02:18:34.372388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372402] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:23:09.737 [2024-12-15 02:18:34.372410] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372416] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372423] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:23:09.737 [2024-12-15 02:18:34.372430] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372436] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372443] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:23:09.737 [2024-12-15 02:18:34.372450] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372457] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372464] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:23:09.737 [2024-12-15 02:18:34.372472] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372478] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.737 [2024-12-15 02:18:34.372485] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:23:09.737 [2024-12-15 02:18:34.372492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:23:09.737 [2024-12-15 02:18:34.372498] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:23:09.737 [2024-12-15 02:18:34.372505] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:23:09.737 [2024-12-15 02:18:34.372511] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:23:09.737 [2024-12-15 02:18:34.372518] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372526] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:23:09.737 [2024-12-15 02:18:34.372533] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:23:09.737 [2024-12-15 02:18:34.372540] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372547] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:23:09.737 [2024-12-15 02:18:34.372556] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:23:09.737 [2024-12-15 02:18:34.372564] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372572] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:23:09.737 [2024-12-15 02:18:34.372579] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:23:09.737 [2024-12-15 02:18:34.372587] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:23:09.737 [2024-12-15 02:18:34.372593] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:23:09.737 
[2024-12-15 02:18:34.372600] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:23:09.737 [2024-12-15 02:18:34.372606] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:23:09.737 [2024-12-15 02:18:34.372613] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:23:09.737 [2024-12-15 02:18:34.372621] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:23:09.737 [2024-12-15 02:18:34.372630] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372642] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:23:09.737 [2024-12-15 02:18:34.372651] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:23:09.737 [2024-12-15 02:18:34.372659] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:23:09.737 [2024-12-15 02:18:34.372665] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:23:09.737 [2024-12-15 02:18:34.372673] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:23:09.737 [2024-12-15 02:18:34.372679] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:23:09.737 [2024-12-15 02:18:34.372687] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:23:09.737 [2024-12-15 02:18:34.372694] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:23:09.737 [2024-12-15 02:18:34.372701] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:23:09.737 [2024-12-15 02:18:34.372708] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372715] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372722] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372730] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372737] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:23:09.737 [2024-12-15 02:18:34.372745] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:23:09.737 [2024-12-15 02:18:34.372753] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372762] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:23:09.737 [2024-12-15 02:18:34.372770] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:23:09.737 [2024-12-15 02:18:34.372778] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:23:09.737 [2024-12-15 02:18:34.372785] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:23:09.737 [2024-12-15 02:18:34.372793] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.737 [2024-12-15 02:18:34.372800] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:23:09.737 [2024-12-15 02:18:34.372808] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.666 ms 00:23:09.737 [2024-12-15 02:18:34.372816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.737 [2024-12-15 02:18:34.405417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.405469] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:23:09.738 [2024-12-15 02:18:34.405481] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.555 ms 00:23:09.738 [2024-12-15 02:18:34.405494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.405598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.405606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:23:09.738 [2024-12-15 02:18:34.405615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:23:09.738 [2024-12-15 02:18:34.405623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.450999] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.451058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:23:09.738 [2024-12-15 02:18:34.451073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 45.313 ms 00:23:09.738 [2024-12-15 02:18:34.451082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.451133] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.451144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:23:09.738 [2024-12-15 02:18:34.451158] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:23:09.738 [2024-12-15 02:18:34.451166] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.451820] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.451862] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:23:09.738 [2024-12-15 02:18:34.451873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:23:09.738 [2024-12-15 02:18:34.451882] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.452046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.452057] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:23:09.738 [2024-12-15 02:18:34.452069] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.131 ms 00:23:09.738 [2024-12-15 02:18:34.452077] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.467916] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.467968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:23:09.738 [2024-12-15 02:18:34.467980] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.819 ms 00:23:09.738 [2024-12-15 02:18:34.467987] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:09.738 [2024-12-15 02:18:34.482557] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:23:09.738 [2024-12-15 02:18:34.482605] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:23:09.738 [2024-12-15 02:18:34.482619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:09.738 [2024-12-15 02:18:34.482628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:23:09.738 [2024-12-15 02:18:34.482639] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.519 ms 00:23:09.738 [2024-12-15 02:18:34.482646] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.509139] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.509193] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:23:10.010 [2024-12-15 02:18:34.509215] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.435 ms 00:23:10.010 [2024-12-15 02:18:34.509223] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.522447] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.522497] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:23:10.010 [2024-12-15 02:18:34.522509] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.154 ms 00:23:10.010 [2024-12-15 02:18:34.522517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.535410] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.535457] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:23:10.010 [2024-12-15 02:18:34.535470] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.843 ms 00:23:10.010 [2024-12-15 02:18:34.535478] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.536126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.536158] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:23:10.010 [2024-12-15 02:18:34.536172] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.530 ms 00:23:10.010 [2024-12-15 02:18:34.536180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.602247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.602308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:23:10.010 [2024-12-15 02:18:34.602332] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 66.029 ms 00:23:10.010 [2024-12-15 02:18:34.602342] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.614154] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:23:10.010 [2024-12-15 02:18:34.617635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.617682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:23:10.010 [2024-12-15 02:18:34.617697] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.229 ms 00:23:10.010 [2024-12-15 02:18:34.617707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.617809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.617821] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:23:10.010 [2024-12-15 02:18:34.617831] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.020 ms 00:23:10.010 [2024-12-15 02:18:34.617843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.617917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.617929] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:23:10.010 [2024-12-15 02:18:34.617939] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:23:10.010 [2024-12-15 02:18:34.617947] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.617969] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.617979] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:23:10.010 [2024-12-15 02:18:34.617987] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:23:10.010 [2024-12-15 02:18:34.617995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.618034] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:23:10.010 [2024-12-15 02:18:34.618046] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.618055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:23:10.010 [2024-12-15 02:18:34.618065] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:23:10.010 [2024-12-15 02:18:34.618073] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.644371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.644428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:23:10.010 [2024-12-15 02:18:34.644448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.277 ms 00:23:10.010 [2024-12-15 02:18:34.644457] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:23:10.010 [2024-12-15 02:18:34.644549] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:23:10.010 [2024-12-15 02:18:34.644559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:23:10.010 [2024-12-15 02:18:34.644569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:23:10.010 [2024-12-15 02:18:34.644578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:23:10.010 [2024-12-15 02:18:34.646065] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 306.509 ms, result 0
00:23:10.952 [2024-12-15T02:18:36.662Z] Copying: 20/1024 [MB] (20 MBps)
[2024-12-15T02:18:38.034Z] Copying: 36/1024 [MB] (15 MBps)
[2024-12-15T02:18:38.969Z] Copying: 87/1024 [MB] (51 MBps)
[2024-12-15T02:18:39.913Z] Copying: 141/1024 [MB] (53 MBps)
[2024-12-15T02:18:40.857Z] Copying: 166/1024 [MB] (25 MBps)
[2024-12-15T02:18:41.798Z] Copying: 186/1024 [MB] (20 MBps)
[2024-12-15T02:18:42.742Z] Copying: 209/1024 [MB] (22 MBps)
[2024-12-15T02:18:43.687Z] Copying: 229/1024 [MB] (20 MBps)
[2024-12-15T02:18:45.069Z] Copying: 242/1024 [MB] (13 MBps)
[2024-12-15T02:18:46.014Z] Copying: 259/1024 [MB] (16 MBps)
[2024-12-15T02:18:46.953Z] Copying: 276/1024 [MB] (17 MBps)
[2024-12-15T02:18:47.894Z] Copying: 292944/1048576 [kB] (10072 kBps)
[2024-12-15T02:18:48.835Z] Copying: 303076/1048576 [kB] (10132 kBps)
[2024-12-15T02:18:49.776Z] Copying: 313024/1048576 [kB] (9948 kBps)
[2024-12-15T02:18:50.717Z] Copying: 318/1024 [MB] (12 MBps)
[2024-12-15T02:18:52.101Z] Copying: 328/1024 [MB] (10 MBps)
[2024-12-15T02:18:52.671Z] Copying: 346008/1048576 [kB] (9928 kBps)
[2024-12-15T02:18:54.049Z] Copying: 356012/1048576 [kB] (10004 kBps)
[2024-12-15T02:18:54.983Z] Copying: 361/1024 [MB] (13 MBps)
[2024-12-15T02:18:55.920Z] Copying: 394/1024 [MB] (33 MBps)
[2024-12-15T02:18:56.852Z] Copying: 427/1024 [MB] (32 MBps)
[2024-12-15T02:18:57.785Z] Copying: 456/1024 [MB] (29 MBps)
[2024-12-15T02:18:58.717Z] Copying: 485/1024 [MB] (29 MBps)
[2024-12-15T02:19:00.089Z] Copying: 531/1024 [MB] (46 MBps)
[2024-12-15T02:19:00.704Z] Copying: 574/1024 [MB] (42 MBps)
[2024-12-15T02:19:02.091Z] Copying: 610/1024 [MB] (36 MBps)
[2024-12-15T02:19:02.660Z] Copying: 631/1024 [MB] (20 MBps)
[2024-12-15T02:19:04.084Z] Copying: 652/1024 [MB] (20 MBps)
[2024-12-15T02:19:05.025Z] Copying: 669/1024 [MB] (17 MBps)
[2024-12-15T02:19:05.970Z] Copying: 687/1024 [MB] (18 MBps)
[2024-12-15T02:19:06.914Z] Copying: 704/1024 [MB] (16 MBps)
[2024-12-15T02:19:07.860Z] Copying: 722/1024 [MB] (18 MBps)
[2024-12-15T02:19:08.806Z] Copying: 735/1024 [MB] (12 MBps)
[2024-12-15T02:19:09.750Z] Copying: 763300/1048576 [kB] (10188 kBps)
[2024-12-15T02:19:10.694Z] Copying: 773428/1048576 [kB] (10128 kBps)
[2024-12-15T02:19:12.077Z] Copying: 765/1024 [MB] (10 MBps)
[2024-12-15T02:19:13.021Z] Copying: 775/1024 [MB] (10 MBps)
[2024-12-15T02:19:13.958Z] Copying: 785/1024 [MB] (10 MBps)
[2024-12-15T02:19:14.903Z] Copying: 811/1024 [MB] (25 MBps)
[2024-12-15T02:19:15.845Z] Copying: 824/1024 [MB] (13 MBps)
[2024-12-15T02:19:16.788Z] Copying: 836/1024 [MB] (11 MBps)
[2024-12-15T02:19:17.721Z] Copying: 856/1024 [MB] (19 MBps)
[2024-12-15T02:19:18.665Z] Copying: 903/1024 [MB] (47 MBps)
[2024-12-15T02:19:20.051Z] Copying: 923/1024 [MB] (20 MBps)
[2024-12-15T02:19:20.995Z] Copying: 939/1024 [MB] (15 MBps)
[2024-12-15T02:19:21.939Z] Copying: 952/1024 [MB] (12 MBps)
[2024-12-15T02:19:22.883Z] Copying: 984904/1048576 [kB] (10048 kBps)
[2024-12-15T02:19:23.817Z] Copying: 995000/1048576 [kB] (10096 kBps)
[2024-12-15T02:19:24.750Z] Copying: 990/1024 [MB] (18 MBps)
[2024-12-15T02:19:25.008Z] Copying: 1016/1024 [MB] (26 MBps)
[2024-12-15T02:19:25.008Z] Copying: 1024/1024 [MB] (average 20 MBps)
[2024-12-15 02:19:24.909029] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:24:00.243 [2024-12-15 02:19:24.909066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:24:00.243 [2024-12-15 02:19:24.909076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:24:00.243 [2024-12-15 02:19:24.909083] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.243 [2024-12-15 02:19:24.909099] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:24:00.243 [2024-12-15 02:19:24.911272] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.243 [2024-12-15 02:19:24.911304] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:24:00.243 [2024-12-15 02:19:24.911313] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.162 ms 00:24:00.243 [2024-12-15 02:19:24.911319] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.243 [2024-12-15 02:19:24.912730] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.243 [2024-12-15 02:19:24.912759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:24:00.243 [2024-12-15 02:19:24.912767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.394 ms 00:24:00.243 [2024-12-15 02:19:24.912773] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.243 [2024-12-15 02:19:24.926992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.927020] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:24:00.244 [2024-12-15 02:19:24.927029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.207 ms 00:24:00.244 [2024-12-15 02:19:24.927039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 02:19:24.932106] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.932130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:24:00.244 [2024-12-15 02:19:24.932139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.049 ms 00:24:00.244 [2024-12-15 02:19:24.932145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 02:19:24.950602] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.950629] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:24:00.244 [2024-12-15 02:19:24.950637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.425 ms 00:24:00.244 [2024-12-15 02:19:24.950643] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 02:19:24.962239] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.962266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:24:00.244 [2024-12-15 02:19:24.962275] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.569 ms 00:24:00.244 [2024-12-15 02:19:24.962281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 02:19:24.963079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.963102] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:24:00.244 [2024-12-15 02:19:24.963109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.766 ms 00:24:00.244 [2024-12-15 02:19:24.963115] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 
02:19:24.981022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.981049] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:24:00.244 [2024-12-15 02:19:24.981057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.896 ms 00:24:00.244 [2024-12-15 02:19:24.981062] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.244 [2024-12-15 02:19:24.998811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.244 [2024-12-15 02:19:24.998838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:24:00.244 [2024-12-15 02:19:24.998845] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.724 ms 00:24:00.244 [2024-12-15 02:19:24.998850] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.505 [2024-12-15 02:19:25.015847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.505 [2024-12-15 02:19:25.015873] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:24:00.505 [2024-12-15 02:19:25.015881] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.971 ms 00:24:00.505 [2024-12-15 02:19:25.015886] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.505 [2024-12-15 02:19:25.033109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.505 [2024-12-15 02:19:25.033136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:24:00.505 [2024-12-15 02:19:25.033144] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.182 ms 00:24:00.505 [2024-12-15 02:19:25.033149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.505 [2024-12-15 02:19:25.033173] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:24:00.505 [2024-12-15 02:19:25.033187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 512 / 261120 wr_cnt: 1 state: open 00:24:00.505 [2024-12-15 02:19:25.033203] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033210] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033216] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033227] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033256] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033261] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033267] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033285] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033301] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033307] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033312] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033317] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033333] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033362] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:24:00.505 [2024-12-15 02:19:25.033367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033373] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033400] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033437] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033469] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033496] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033528] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 
02:19:25.033535] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033540] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033557] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033596] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033602] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033618] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033624] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033670] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033675] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033681] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 
00:24:00.506 [2024-12-15 02:19:25.033687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033703] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033709] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033714] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033737] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:24:00.506 [2024-12-15 02:19:25.033748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:24:00.507 [2024-12-15 02:19:25.033754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:24:00.507 [2024-12-15 02:19:25.033760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:24:00.507 [2024-12-15 02:19:25.033772] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:24:00.507 [2024-12-15 02:19:25.033778] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b824f819-dc72-458f-a122-460abb6a208d 00:24:00.507 [2024-12-15 02:19:25.033784] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 512 00:24:00.507 [2024-12-15 02:19:25.033789] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 1472 00:24:00.507 [2024-12-15 02:19:25.033794] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 512 00:24:00.507 [2024-12-15 02:19:25.033800] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 2.8750 00:24:00.507 [2024-12-15 02:19:25.033810] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:24:00.507 [2024-12-15 02:19:25.033816] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:24:00.507 [2024-12-15 02:19:25.033822] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:24:00.507 [2024-12-15 02:19:25.033826] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:24:00.507 [2024-12-15 02:19:25.033831] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:24:00.507 [2024-12-15 02:19:25.033836] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.507 [2024-12-15 02:19:25.033842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:24:00.507 [2024-12-15 02:19:25.033847] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.664 ms 00:24:00.507 [2024-12-15 02:19:25.033854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.043247] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.507 [2024-12-15 02:19:25.043272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:24:00.507 [2024-12-15 02:19:25.043280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.380 ms 00:24:00.507 [2024-12-15 02:19:25.043285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.043550] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:00.507 [2024-12-15 02:19:25.043570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:24:00.507 [2024-12-15 02:19:25.043576] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.252 ms 00:24:00.507 [2024-12-15 02:19:25.043581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.069287] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.069314] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:00.507 [2024-12-15 02:19:25.069322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.507 [2024-12-15 02:19:25.069328] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.069366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.069375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:00.507 [2024-12-15 02:19:25.069380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.507 [2024-12-15 02:19:25.069386] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.069436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.069444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:00.507 [2024-12-15 02:19:25.069449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.507 [2024-12-15 02:19:25.069455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.069465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.069471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:00.507 [2024-12-15 02:19:25.069479] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.507 [2024-12-15 02:19:25.069485] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.129309] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.129342] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:00.507 [2024-12-15 02:19:25.129350] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:24:00.507 [2024-12-15 02:19:25.129356] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:00.507 [2024-12-15 02:19:25.177980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:24:00.507 [2024-12-15 02:19:25.178017] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Initialize metadata
00:24:00.507 [2024-12-15 02:19:25.178025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178032] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178076] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel
00:24:00.507 [2024-12-15 02:19:25.178082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178132] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands
00:24:00.507 [2024-12-15 02:19:25.178138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178146] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178234] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools
00:24:00.507 [2024-12-15 02:19:25.178240] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178246] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178269] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock
00:24:00.507 [2024-12-15 02:19:25.178282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178317] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178324] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev
00:24:00.507 [2024-12-15 02:19:25.178329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178367] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:24:00.507 [2024-12-15 02:19:25.178374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:24:00.507 [2024-12-15 02:19:25.178380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:24:00.507 [2024-12-15 02:19:25.178387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:24:00.507 [2024-12-15 02:19:25.178474] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 269.426 ms, result 0
00:24:01.121
00:24:01.121
00:24:01.121 02:19:25 ftl.ftl_restore -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144
00:24:01.407 [2024-12-15 02:19:25.866958] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0
initialization... 00:24:01.407 [2024-12-15 02:19:25.867094] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81154 ] 00:24:01.407 [2024-12-15 02:19:26.020669] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:24:01.407 [2024-12-15 02:19:26.095505] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:24:01.667 [2024-12-15 02:19:26.305352] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:01.667 [2024-12-15 02:19:26.305404] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:24:01.929 [2024-12-15 02:19:26.452859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.929 [2024-12-15 02:19:26.452904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:24:01.929 [2024-12-15 02:19:26.452917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:24:01.929 [2024-12-15 02:19:26.452926] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.929 [2024-12-15 02:19:26.452971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.929 [2024-12-15 02:19:26.452983] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:24:01.929 [2024-12-15 02:19:26.452991] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:24:01.929 [2024-12-15 02:19:26.452998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.929 [2024-12-15 02:19:26.453015] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:24:01.929 [2024-12-15 02:19:26.453736] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:24:01.929 [2024-12-15 02:19:26.453760] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.929 [2024-12-15 02:19:26.453767] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:24:01.929 [2024-12-15 02:19:26.453776] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.749 ms 00:24:01.929 [2024-12-15 02:19:26.453783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.929 [2024-12-15 02:19:26.454850] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:24:01.929 [2024-12-15 02:19:26.467420] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.929 [2024-12-15 02:19:26.467452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:24:01.929 [2024-12-15 02:19:26.467463] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.571 ms 00:24:01.929 [2024-12-15 02:19:26.467470] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.929 [2024-12-15 02:19:26.467529] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.467539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:24:01.930 [2024-12-15 02:19:26.467547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:24:01.930 [2024-12-15 02:19:26.467554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.472603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.472634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:24:01.930 [2024-12-15 02:19:26.472643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.999 ms 00:24:01.930 [2024-12-15 02:19:26.472654] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.472720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.472728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:24:01.930 [2024-12-15 02:19:26.472736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:24:01.930 [2024-12-15 02:19:26.472743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.472789] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.472798] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:24:01.930 [2024-12-15 02:19:26.472806] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:01.930 [2024-12-15 02:19:26.472813] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.472836] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:24:01.930 [2024-12-15 02:19:26.476189] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.476225] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:24:01.930 [2024-12-15 02:19:26.476236] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.357 ms 00:24:01.930 [2024-12-15 02:19:26.476243] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.476273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.476281] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:24:01.930 [2024-12-15 02:19:26.476289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:24:01.930 [2024-12-15 02:19:26.476296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.476314] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:24:01.930 [2024-12-15 02:19:26.476333] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:24:01.930 [2024-12-15 02:19:26.476367] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:24:01.930 [2024-12-15 02:19:26.476384] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:24:01.930 [2024-12-15 02:19:26.476486] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:24:01.930 [2024-12-15 02:19:26.476496] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:24:01.930 [2024-12-15 02:19:26.476506] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:24:01.930 [2024-12-15 02:19:26.476515] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476525] 
ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476532] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:24:01.930 [2024-12-15 02:19:26.476539] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:24:01.930 [2024-12-15 02:19:26.476546] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:24:01.930 [2024-12-15 02:19:26.476556] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:24:01.930 [2024-12-15 02:19:26.476564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.476571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:24:01.930 [2024-12-15 02:19:26.476578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.251 ms 00:24:01.930 [2024-12-15 02:19:26.476585] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.476667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.930 [2024-12-15 02:19:26.476675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:24:01.930 [2024-12-15 02:19:26.476683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:24:01.930 [2024-12-15 02:19:26.476690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.930 [2024-12-15 02:19:26.476799] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:24:01.930 [2024-12-15 02:19:26.476816] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:24:01.930 [2024-12-15 02:19:26.476824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476832] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476839] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:24:01.930 [2024-12-15 02:19:26.476846] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476853] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476859] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:24:01.930 [2024-12-15 02:19:26.476866] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476873] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.930 [2024-12-15 02:19:26.476880] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:24:01.930 [2024-12-15 02:19:26.476887] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:24:01.930 [2024-12-15 02:19:26.476894] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:24:01.930 [2024-12-15 02:19:26.476907] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:24:01.930 [2024-12-15 02:19:26.476913] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:24:01.930 [2024-12-15 02:19:26.476920] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476926] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:24:01.930 [2024-12-15 02:19:26.476933] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476940] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476946] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:24:01.930 [2024-12-15 02:19:26.476953] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476959] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:24:01.930 [2024-12-15 02:19:26.476973] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:24:01.930 [2024-12-15 02:19:26.476983] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.930 [2024-12-15 02:19:26.476990] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:24:01.930 [2024-12-15 02:19:26.476996] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:24:01.930 [2024-12-15 02:19:26.477003] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.930 [2024-12-15 02:19:26.477009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:24:01.930 [2024-12-15 02:19:26.477021] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:24:01.930 [2024-12-15 02:19:26.477028] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:24:01.930 [2024-12-15 02:19:26.477038] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:24:01.930 [2024-12-15 02:19:26.477045] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:24:01.930 [2024-12-15 02:19:26.477055] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.930 [2024-12-15 02:19:26.477062] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:24:01.930 [2024-12-15 02:19:26.477068] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:24:01.930 [2024-12-15 02:19:26.477075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:24:01.930 [2024-12-15 02:19:26.477081] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:24:01.930 [2024-12-15 02:19:26.477087] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:24:01.930 [2024-12-15 02:19:26.477099] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.930 [2024-12-15 02:19:26.477105] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:24:01.930 [2024-12-15 02:19:26.477111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:24:01.930 [2024-12-15 02:19:26.477117] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.931 [2024-12-15 02:19:26.477124] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:24:01.931 [2024-12-15 02:19:26.477132] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:24:01.931 [2024-12-15 02:19:26.477139] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:24:01.931 [2024-12-15 02:19:26.477146] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:24:01.931 [2024-12-15 02:19:26.477154] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:24:01.931 [2024-12-15 02:19:26.477160] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:24:01.931 [2024-12-15 02:19:26.477167] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:24:01.931 
[2024-12-15 02:19:26.477173] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:24:01.931 [2024-12-15 02:19:26.477179] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:24:01.931 [2024-12-15 02:19:26.477185] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:24:01.931 [2024-12-15 02:19:26.477218] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:24:01.931 [2024-12-15 02:19:26.477229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477240] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:24:01.931 [2024-12-15 02:19:26.477247] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:24:01.931 [2024-12-15 02:19:26.477254] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:24:01.931 [2024-12-15 02:19:26.477260] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:24:01.931 [2024-12-15 02:19:26.477268] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:24:01.931 [2024-12-15 02:19:26.477274] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:24:01.931 [2024-12-15 02:19:26.477281] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:24:01.931 [2024-12-15 02:19:26.477288] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:24:01.931 [2024-12-15 02:19:26.477296] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:24:01.931 [2024-12-15 02:19:26.477303] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477310] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477317] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477323] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477330] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:24:01.931 [2024-12-15 02:19:26.477337] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:24:01.931 [2024-12-15 02:19:26.477345] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477353] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:24:01.931 [2024-12-15 02:19:26.477360] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:24:01.931 [2024-12-15 02:19:26.477368] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:24:01.931 [2024-12-15 02:19:26.477374] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:24:01.931 [2024-12-15 02:19:26.477383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.477389] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:24:01.931 [2024-12-15 02:19:26.477398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.652 ms 00:24:01.931 [2024-12-15 02:19:26.477408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.503631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.503663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:24:01.931 [2024-12-15 02:19:26.503673] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.175 ms 00:24:01.931 [2024-12-15 02:19:26.503684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.503762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.503770] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:24:01.931 [2024-12-15 02:19:26.503778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.059 ms 00:24:01.931 [2024-12-15 02:19:26.503785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.547093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.547135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:24:01.931 [2024-12-15 02:19:26.547147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.258 ms 00:24:01.931 [2024-12-15 02:19:26.547155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.547205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.547215] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:24:01.931 [2024-12-15 02:19:26.547227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:24:01.931 [2024-12-15 02:19:26.547235] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.547663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.547693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:24:01.931 [2024-12-15 02:19:26.547702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.363 ms 00:24:01.931 [2024-12-15 02:19:26.547709] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.547846] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.547855] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:24:01.931 [2024-12-15 02:19:26.547867] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.112 ms 00:24:01.931 [2024-12-15 02:19:26.547875] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.561502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.561537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:24:01.931 [2024-12-15 02:19:26.561547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.610 ms 00:24:01.931 [2024-12-15 02:19:26.561554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.574505] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 3, empty chunks = 1 00:24:01.931 [2024-12-15 02:19:26.574542] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:24:01.931 [2024-12-15 02:19:26.574554] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.574562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:24:01.931 [2024-12-15 02:19:26.574571] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.901 ms 00:24:01.931 [2024-12-15 02:19:26.574578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.599250] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.599289] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:24:01.931 [2024-12-15 02:19:26.599300] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.630 ms 00:24:01.931 [2024-12-15 02:19:26.599308] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.611450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.611488] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:24:01.931 [2024-12-15 02:19:26.611498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.097 ms 00:24:01.931 [2024-12-15 02:19:26.611505] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.623679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.623715] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:24:01.931 [2024-12-15 02:19:26.623725] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.135 ms 00:24:01.931 [2024-12-15 02:19:26.623732] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.624379] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.624408] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:24:01.931 [2024-12-15 02:19:26.624421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.550 ms 00:24:01.931 [2024-12-15 02:19:26.624428] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:01.931 [2024-12-15 02:19:26.688436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:01.931 [2024-12-15 02:19:26.688492] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:24:01.931 [2024-12-15 02:19:26.688514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 63.989 ms 00:24:01.932 [2024-12-15 02:19:26.688523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.193 [2024-12-15 02:19:26.699559] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:24:02.194 [2024-12-15 02:19:26.702541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.702583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:24:02.194 [2024-12-15 02:19:26.702596] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.959 ms 00:24:02.194 [2024-12-15 02:19:26.702605] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.702711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.702724] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:24:02.194 [2024-12-15 02:19:26.702734] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:24:02.194 [2024-12-15 02:19:26.702746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.703568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.703616] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:24:02.194 [2024-12-15 02:19:26.703628] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.782 ms 00:24:02.194 [2024-12-15 02:19:26.703638] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.703666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.703676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:24:02.194 [2024-12-15 02:19:26.703685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:24:02.194 [2024-12-15 02:19:26.703695] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.703738] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:24:02.194 [2024-12-15 02:19:26.703750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.703760] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:24:02.194 [2024-12-15 02:19:26.703770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:24:02.194 [2024-12-15 02:19:26.703779] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.729588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.729635] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:24:02.194 [2024-12-15 02:19:26.729664] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.788 ms 00:24:02.194 [2024-12-15 02:19:26.729673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:24:02.194 [2024-12-15 02:19:26.729757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:24:02.194 [2024-12-15 02:19:26.729769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:24:02.194 [2024-12-15 02:19:26.729778] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:24:02.194 [2024-12-15 02:19:26.729787] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:24:02.194 [2024-12-15 02:19:26.731290] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 277.908 ms, result 0 00:24:03.580  [2024-12-15T02:19:29.288Z] Copying: 1032/1048576 [kB] (1032 kBps) [2024-12-15T02:19:30.231Z] Copying: 9960/1048576 [kB] (8928 kBps) [2024-12-15T02:19:31.176Z] Copying: 20/1024 [MB] (10 MBps) [2024-12-15T02:19:32.121Z] Copying: 31/1024 [MB] (10 MBps) [2024-12-15T02:19:33.067Z] Copying: 47/1024 [MB] (16 MBps) [2024-12-15T02:19:34.012Z] Copying: 65/1024 [MB] (17 MBps) [2024-12-15T02:19:34.959Z] Copying: 75/1024 [MB] (10 MBps) [2024-12-15T02:19:36.348Z] Copying: 85/1024 [MB] (10 MBps) [2024-12-15T02:19:37.294Z] Copying: 112/1024 [MB] (26 MBps) [2024-12-15T02:19:38.239Z] Copying: 123/1024 [MB] (11 MBps) [2024-12-15T02:19:39.183Z] Copying: 134/1024 [MB] (10 MBps) [2024-12-15T02:19:40.126Z] Copying: 155/1024 [MB] (21 MBps) [2024-12-15T02:19:41.068Z] Copying: 174/1024 [MB] (18 MBps) [2024-12-15T02:19:42.013Z] Copying: 196/1024 [MB] (22 MBps) [2024-12-15T02:19:42.956Z] Copying: 216/1024 [MB] (19 MBps) [2024-12-15T02:19:44.344Z] Copying: 231/1024 [MB] (14 MBps) [2024-12-15T02:19:45.295Z] Copying: 241/1024 [MB] (10 MBps) [2024-12-15T02:19:46.238Z] Copying: 254/1024 [MB] (13 MBps) [2024-12-15T02:19:47.182Z] Copying: 287/1024 [MB] (32 MBps) [2024-12-15T02:19:48.127Z] Copying: 298/1024 [MB] (11 MBps) [2024-12-15T02:19:49.068Z] Copying: 320/1024 [MB] (21 MBps) [2024-12-15T02:19:50.013Z] Copying: 340/1024 [MB] (20 MBps) [2024-12-15T02:19:50.955Z] Copying: 362/1024 [MB] (21 MBps) [2024-12-15T02:19:51.984Z] Copying: 382/1024 [MB] (19 MBps) [2024-12-15T02:19:52.927Z] Copying: 392/1024 [MB] (10 MBps) [2024-12-15T02:19:54.315Z] Copying: 412/1024 [MB] (19 MBps) [2024-12-15T02:19:55.256Z] Copying: 436/1024 [MB] (23 MBps) [2024-12-15T02:19:56.201Z] Copying: 456/1024 [MB] (19 MBps) [2024-12-15T02:19:57.146Z] Copying: 479/1024 [MB] (23 MBps) [2024-12-15T02:19:58.090Z] Copying: 495/1024 [MB] (15 MBps) [2024-12-15T02:19:59.031Z] Copying: 514/1024 [MB] (19 MBps) [2024-12-15T02:19:59.975Z] Copying: 537/1024 [MB] (23 MBps) [2024-12-15T02:20:01.366Z] Copying: 566/1024 [MB] (28 MBps) [2024-12-15T02:20:01.940Z] Copying: 577/1024 [MB] (11 MBps) [2024-12-15T02:20:03.331Z] Copying: 595/1024 [MB] (18 MBps) [2024-12-15T02:20:04.279Z] Copying: 609/1024 [MB] (13 MBps) [2024-12-15T02:20:05.227Z] Copying: 620/1024 [MB] (10 MBps) [2024-12-15T02:20:06.173Z] Copying: 630/1024 [MB] (10 MBps) [2024-12-15T02:20:07.115Z] Copying: 641/1024 [MB] (10 MBps) [2024-12-15T02:20:08.057Z] Copying: 652/1024 [MB] (11 MBps) [2024-12-15T02:20:09.001Z] Copying: 663/1024 [MB] (11 MBps) [2024-12-15T02:20:09.945Z] Copying: 674/1024 [MB] (10 MBps) [2024-12-15T02:20:11.331Z] Copying: 685/1024 [MB] (10 MBps) [2024-12-15T02:20:12.278Z] Copying: 696/1024 [MB] (10 MBps) [2024-12-15T02:20:13.226Z] Copying: 707/1024 [MB] (11 MBps) [2024-12-15T02:20:14.172Z] Copying: 717/1024 [MB] (10 MBps) [2024-12-15T02:20:15.118Z] Copying: 729/1024 [MB] (11 MBps) [2024-12-15T02:20:16.065Z] Copying: 739/1024 [MB] (10 MBps) [2024-12-15T02:20:17.012Z] Copying: 750/1024 [MB] (10 MBps) [2024-12-15T02:20:18.005Z] Copying: 760/1024 [MB] (10 MBps) [2024-12-15T02:20:18.977Z] Copying: 771/1024 [MB] (11 MBps) [2024-12-15T02:20:20.368Z] Copying: 782/1024 [MB] (10 MBps) [2024-12-15T02:20:20.942Z] Copying: 792/1024 [MB] (10 MBps) [2024-12-15T02:20:22.335Z] Copying: 803/1024 [MB] (10 MBps) [2024-12-15T02:20:23.281Z] Copying: 813/1024 [MB] (10 MBps) [2024-12-15T02:20:24.226Z] Copying: 834/1024 [MB] (20 
MBps) [2024-12-15T02:20:25.170Z] Copying: 856/1024 [MB] (22 MBps) [2024-12-15T02:20:26.111Z] Copying: 878/1024 [MB] (21 MBps) [2024-12-15T02:20:27.054Z] Copying: 892/1024 [MB] (14 MBps) [2024-12-15T02:20:28.000Z] Copying: 909/1024 [MB] (17 MBps) [2024-12-15T02:20:28.942Z] Copying: 933/1024 [MB] (23 MBps) [2024-12-15T02:20:30.330Z] Copying: 953/1024 [MB] (20 MBps) [2024-12-15T02:20:31.275Z] Copying: 971/1024 [MB] (17 MBps) [2024-12-15T02:20:32.219Z] Copying: 992/1024 [MB] (21 MBps) [2024-12-15T02:20:32.789Z] Copying: 1010/1024 [MB] (17 MBps) [2024-12-15T02:20:32.789Z] Copying: 1024/1024 [MB] (average 15 MBps)[2024-12-15 02:20:32.592986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.593075] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:25:08.024 [2024-12-15 02:20:32.593096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:25:08.024 [2024-12-15 02:20:32.593113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.593144] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:25:08.024 [2024-12-15 02:20:32.597312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.597363] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:25:08.024 [2024-12-15 02:20:32.597380] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.145 ms 00:25:08.024 [2024-12-15 02:20:32.597391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.597716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.597731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:25:08.024 [2024-12-15 02:20:32.597743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.293 ms 00:25:08.024 [2024-12-15 02:20:32.597762] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.611944] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.611993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:25:08.024 [2024-12-15 02:20:32.612006] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.160 ms 00:25:08.024 [2024-12-15 02:20:32.612015] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.618157] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.618206] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:25:08.024 [2024-12-15 02:20:32.618217] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.113 ms 00:25:08.024 [2024-12-15 02:20:32.618233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.644612] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.024 [2024-12-15 02:20:32.644658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:25:08.024 [2024-12-15 02:20:32.644670] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.327 ms 00:25:08.024 [2024-12-15 02:20:32.644677] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.024 [2024-12-15 02:20:32.660771] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:25:08.024 [2024-12-15 02:20:32.660811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:25:08.024 [2024-12-15 02:20:32.660823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.049 ms 00:25:08.024 [2024-12-15 02:20:32.660831] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.847079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.287 [2024-12-15 02:20:32.847144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:25:08.287 [2024-12-15 02:20:32.847156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 186.197 ms 00:25:08.287 [2024-12-15 02:20:32.847165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.872696] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.287 [2024-12-15 02:20:32.872736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:25:08.287 [2024-12-15 02:20:32.872747] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.514 ms 00:25:08.287 [2024-12-15 02:20:32.872755] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.897330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.287 [2024-12-15 02:20:32.897374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:25:08.287 [2024-12-15 02:20:32.897386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.532 ms 00:25:08.287 [2024-12-15 02:20:32.897395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.922079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.287 [2024-12-15 02:20:32.922122] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:25:08.287 [2024-12-15 02:20:32.922133] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.640 ms 00:25:08.287 [2024-12-15 02:20:32.922141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.946401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.287 [2024-12-15 02:20:32.946442] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:25:08.287 [2024-12-15 02:20:32.946452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.173 ms 00:25:08.287 [2024-12-15 02:20:32.946460] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.287 [2024-12-15 02:20:32.946501] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:25:08.287 [2024-12-15 02:20:32.946517] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131840 / 261120 wr_cnt: 1 state: open 00:25:08.287 [2024-12-15 02:20:32.946529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946546] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946562] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 6: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946595] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946630] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946637] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946666] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946674] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946690] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946706] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946721] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946729] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946736] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946752] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946760] ftl_debug.c: 167:ftl_dev_dump_bands: 
*NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946768] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946775] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946807] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946815] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:25:08.287 [2024-12-15 02:20:32.946829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946836] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946844] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946851] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946866] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946873] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946881] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946889] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946912] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946926] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946934] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946948] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946956] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946963] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946978] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946986] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.946994] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947032] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947047] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947063] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947071] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947135] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 
02:20:32.947143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947158] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947188] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947209] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947218] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947250] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947258] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947291] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947299] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947308] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:25:08.288 [2024-12-15 02:20:32.947324] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:25:08.288 [2024-12-15 02:20:32.947333] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: b824f819-dc72-458f-a122-460abb6a208d 00:25:08.288 [2024-12-15 02:20:32.947343] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131840 00:25:08.288 [2024-12-15 02:20:32.947352] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 132288 00:25:08.288 [2024-12-15 02:20:32.947360] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 131328 00:25:08.288 [2024-12-15 02:20:32.947369] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl0] WAF: 1.0073 00:25:08.288 [2024-12-15 02:20:32.947383] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:25:08.288 [2024-12-15 02:20:32.947399] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:25:08.288 [2024-12-15 02:20:32.947408] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:25:08.288 [2024-12-15 02:20:32.947415] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:25:08.288 [2024-12-15 02:20:32.947421] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:25:08.288 [2024-12-15 02:20:32.947429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.288 [2024-12-15 02:20:32.947438] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:25:08.288 [2024-12-15 02:20:32.947447] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.929 ms 00:25:08.288 [2024-12-15 02:20:32.947455] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.288 [2024-12-15 02:20:32.960637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.289 [2024-12-15 02:20:32.960675] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:25:08.289 [2024-12-15 02:20:32.960693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.164 ms 00:25:08.289 [2024-12-15 02:20:32.960702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.289 [2024-12-15 02:20:32.961102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:08.289 [2024-12-15 02:20:32.961125] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:25:08.289 [2024-12-15 02:20:32.961135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:25:08.289 [2024-12-15 02:20:32.961144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.289 [2024-12-15 02:20:32.997262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.289 [2024-12-15 02:20:32.997310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:08.289 [2024-12-15 02:20:32.997322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.289 [2024-12-15 02:20:32.997332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.289 [2024-12-15 02:20:32.997417] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.289 [2024-12-15 02:20:32.997428] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:08.289 [2024-12-15 02:20:32.997437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.289 [2024-12-15 02:20:32.997447] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.289 [2024-12-15 02:20:32.997511] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.289 [2024-12-15 02:20:32.997523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:08.289 [2024-12-15 02:20:32.997538] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.289 [2024-12-15 02:20:32.997547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.289 [2024-12-15 02:20:32.997581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.289 [2024-12-15 02:20:32.997591] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:08.289 [2024-12-15 02:20:32.997600] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.289 [2024-12-15 02:20:32.997608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.082243] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.082298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:08.550 [2024-12-15 02:20:33.082311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.082320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151371] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:08.550 [2024-12-15 02:20:33.151433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151522] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:08.550 [2024-12-15 02:20:33.151531] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151600] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151611] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:08.550 [2024-12-15 02:20:33.151619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:08.550 [2024-12-15 02:20:33.151746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151790] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151799] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:25:08.550 [2024-12-15 02:20:33.151807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151815] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151853] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.550 [2024-12-15 02:20:33.151863] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:08.550 [2024-12-15 02:20:33.151871] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.550 [2024-12-15 02:20:33.151880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.550 [2024-12-15 02:20:33.151929] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:25:08.551 [2024-12-15 02:20:33.151939] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] 
name: Open base bdev 00:25:08.551 [2024-12-15 02:20:33.151948] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:25:08.551 [2024-12-15 02:20:33.151956] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:08.551 [2024-12-15 02:20:33.152090] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 559.083 ms, result 0 00:25:09.493 00:25:09.493 00:25:09.493 02:20:33 ftl.ftl_restore -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:11.409 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:25:11.409 02:20:36 ftl.ftl_restore -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:25:11.409 02:20:36 ftl.ftl_restore -- ftl/restore.sh@85 -- # restore_kill 00:25:11.409 02:20:36 ftl.ftl_restore -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/restore.sh@32 -- # killprocess 79033 00:25:11.672 02:20:36 ftl.ftl_restore -- common/autotest_common.sh@954 -- # '[' -z 79033 ']' 00:25:11.672 02:20:36 ftl.ftl_restore -- common/autotest_common.sh@958 -- # kill -0 79033 00:25:11.672 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (79033) - No such process 00:25:11.672 Process with pid 79033 is not found 00:25:11.672 02:20:36 ftl.ftl_restore -- common/autotest_common.sh@981 -- # echo 'Process with pid 79033 is not found' 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/restore.sh@33 -- # remove_shm 00:25:11.672 Remove shared memory files 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@204 -- # echo Remove shared memory files 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@205 -- # rm -f rm -f 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@206 -- # rm -f rm -f 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@207 -- # rm -f rm -f 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:25:11.672 02:20:36 ftl.ftl_restore -- ftl/common.sh@209 -- # rm -f rm -f 00:25:11.672 00:25:11.672 real 4m35.626s 00:25:11.672 user 4m23.633s 00:25:11.672 sys 0m11.900s 00:25:11.672 02:20:36 ftl.ftl_restore -- common/autotest_common.sh@1130 -- # xtrace_disable 00:25:11.672 ************************************ 00:25:11.672 END TEST ftl_restore 00:25:11.672 02:20:36 ftl.ftl_restore -- common/autotest_common.sh@10 -- # set +x 00:25:11.672 ************************************ 00:25:11.672 02:20:36 ftl -- ftl/ftl.sh@77 -- # run_test ftl_dirty_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:11.672 02:20:36 ftl -- common/autotest_common.sh@1105 -- # '[' 5 -le 1 ']' 00:25:11.672 02:20:36 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:25:11.672 02:20:36 ftl -- common/autotest_common.sh@10 -- # set +x 00:25:11.672 ************************************ 00:25:11.672 START TEST ftl_dirty_shutdown 00:25:11.672 ************************************ 00:25:11.672 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh -c 0000:00:10.0 0000:00:11.0 00:25:11.934 * Looking for test storage... 
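The ftl_dirty_shutdown test that begins above exercises FTL recovery after an unclean stop: it assembles an FTL bdev from two NVMe devices (a base data device at 0000:00:11.0 and an NV-cache device at 0000:00:10.0), writes a known pattern through it, records an md5, and later verifies the data survives a shutdown that skips the normal teardown. A condensed sketch of the setup-and-write flow, pieced together from the rpc.py and spdk_dd invocations recorded further down in this log (UUIDs abbreviated, paths shortened; the real script derives sizes from bdev_get_bdevs rather than hard-coding them):

# Attach the base device and carve a thin lvol out of it for FTL data.
rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0
rpc.py bdev_lvol_create_lvstore nvme0n1 lvs
rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u <lvs-uuid>
# Attach the cache device and split off a partition sized to the lvol.
rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0
rpc.py bdev_split_create nvc0n1 -s 5171 1
# Create the FTL bdev with a 10 MiB L2P DRAM budget, expose it over NBD.
rpc.py -t 240 bdev_ftl_create -b ftl0 -d <lvol-uuid> --l2p_dram_limit 10 -c nvc0n1p0
rpc.py nbd_start_disk ftl0 /dev/nbd0
# Generate 1 GiB of random data (262144 x 4096-byte blocks), then write it.
spdk_dd -m 0x2 --if=/dev/urandom --of=testfile --bs=4096 --count=262144
md5sum testfile > testfile.md5   # checksum capture assumed; md5sum appears below
spdk_dd -m 0x2 --if=testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct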
00:25:11.934 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@345 -- # : 1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # decimal 1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # decimal 2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@353 -- # local d=2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@355 -- # echo 2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- scripts/common.sh@368 -- # return 0 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:25:11.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.934 --rc genhtml_branch_coverage=1 00:25:11.934 --rc genhtml_function_coverage=1 00:25:11.934 --rc genhtml_legend=1 00:25:11.934 --rc geninfo_all_blocks=1 00:25:11.934 --rc geninfo_unexecuted_blocks=1 00:25:11.934 00:25:11.934 ' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:25:11.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.934 --rc genhtml_branch_coverage=1 00:25:11.934 --rc genhtml_function_coverage=1 00:25:11.934 --rc genhtml_legend=1 00:25:11.934 --rc geninfo_all_blocks=1 00:25:11.934 --rc geninfo_unexecuted_blocks=1 00:25:11.934 00:25:11.934 ' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:25:11.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.934 --rc genhtml_branch_coverage=1 00:25:11.934 --rc genhtml_function_coverage=1 00:25:11.934 --rc genhtml_legend=1 00:25:11.934 --rc geninfo_all_blocks=1 00:25:11.934 --rc geninfo_unexecuted_blocks=1 00:25:11.934 00:25:11.934 ' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:25:11.934 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:25:11.934 --rc genhtml_branch_coverage=1 00:25:11.934 --rc genhtml_function_coverage=1 00:25:11.934 --rc genhtml_legend=1 00:25:11.934 --rc geninfo_all_blocks=1 00:25:11.934 --rc geninfo_unexecuted_blocks=1 00:25:11.934 00:25:11.934 ' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # readlink -f 
/home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@12 -- # spdk_dd=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@15 -- # case $opt in 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@17 -- # nv_cache=0000:00:10.0 00:25:11.934 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@14 -- # getopts :u:c: opt 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@21 -- # shift 2 00:25:11.935 02:20:36 
ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@23 -- # device=0000:00:11.0 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@24 -- # timeout=240 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@26 -- # block_size=4096 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@27 -- # chunk_size=262144 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@28 -- # data_size=262144 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@42 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@45 -- # svcpid=81939 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@47 -- # waitforlisten 81939 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@835 -- # '[' -z 81939 ']' 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:25:11.935 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@44 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt -m 0x1 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:25:11.935 02:20:36 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:25:11.935 [2024-12-15 02:20:36.630265] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
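waitforlisten, invoked above with svcpid=81939, gates the rest of the test on the freshly launched spdk_tgt becoming reachable over /var/tmp/spdk.sock. A minimal sketch of that kind of readiness loop, assuming a poll via the standard rpc_get_methods RPC (hypothetical; the real autotest_common.sh helper adds configurable retries and richer diagnostics):

# Poll until the target answers a trivial RPC, bailing out if it dies first.
for ((i = 0; i <= 100; i++)); do
    if rpc.py -s /var/tmp/spdk.sock rpc_get_methods &> /dev/null; then
        break                     # RPC socket is up; safe to issue bdev RPCs
    fi
    kill -0 81939 || exit 1       # target exited during startup
    sleep 0.5
done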
00:25:11.935 [2024-12-15 02:20:36.630405] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid81939 ] 00:25:12.197 [2024-12-15 02:20:36.794079] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:12.197 [2024-12-15 02:20:36.910274] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@868 -- # return 0 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@54 -- # local name=nvme0 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@56 -- # local size=103424 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:25:13.141 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@62 -- # local base_size 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:13.402 02:20:37 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:25:13.402 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:13.402 { 00:25:13.402 "name": "nvme0n1", 00:25:13.402 "aliases": [ 00:25:13.402 "bee65857-4434-4028-9c7c-aed80bfaac69" 00:25:13.402 ], 00:25:13.402 "product_name": "NVMe disk", 00:25:13.402 "block_size": 4096, 00:25:13.402 "num_blocks": 1310720, 00:25:13.402 "uuid": "bee65857-4434-4028-9c7c-aed80bfaac69", 00:25:13.402 "numa_id": -1, 00:25:13.402 "assigned_rate_limits": { 00:25:13.402 "rw_ios_per_sec": 0, 00:25:13.402 "rw_mbytes_per_sec": 0, 00:25:13.402 "r_mbytes_per_sec": 0, 00:25:13.402 "w_mbytes_per_sec": 0 00:25:13.402 }, 00:25:13.402 "claimed": true, 00:25:13.402 "claim_type": "read_many_write_one", 00:25:13.402 "zoned": false, 00:25:13.402 "supported_io_types": { 00:25:13.402 "read": true, 00:25:13.402 "write": true, 00:25:13.403 "unmap": true, 00:25:13.403 "flush": true, 00:25:13.403 "reset": true, 00:25:13.403 "nvme_admin": true, 00:25:13.403 "nvme_io": true, 00:25:13.403 "nvme_io_md": false, 00:25:13.403 "write_zeroes": true, 00:25:13.403 "zcopy": false, 00:25:13.403 "get_zone_info": false, 00:25:13.403 "zone_management": false, 00:25:13.403 "zone_append": false, 00:25:13.403 "compare": true, 00:25:13.403 "compare_and_write": false, 00:25:13.403 "abort": true, 00:25:13.403 "seek_hole": false, 00:25:13.403 "seek_data": false, 00:25:13.403 
"copy": true, 00:25:13.403 "nvme_iov_md": false 00:25:13.403 }, 00:25:13.403 "driver_specific": { 00:25:13.403 "nvme": [ 00:25:13.403 { 00:25:13.403 "pci_address": "0000:00:11.0", 00:25:13.403 "trid": { 00:25:13.403 "trtype": "PCIe", 00:25:13.403 "traddr": "0000:00:11.0" 00:25:13.403 }, 00:25:13.403 "ctrlr_data": { 00:25:13.403 "cntlid": 0, 00:25:13.403 "vendor_id": "0x1b36", 00:25:13.403 "model_number": "QEMU NVMe Ctrl", 00:25:13.403 "serial_number": "12341", 00:25:13.403 "firmware_revision": "8.0.0", 00:25:13.403 "subnqn": "nqn.2019-08.org.qemu:12341", 00:25:13.403 "oacs": { 00:25:13.403 "security": 0, 00:25:13.403 "format": 1, 00:25:13.403 "firmware": 0, 00:25:13.403 "ns_manage": 1 00:25:13.403 }, 00:25:13.403 "multi_ctrlr": false, 00:25:13.403 "ana_reporting": false 00:25:13.403 }, 00:25:13.403 "vs": { 00:25:13.403 "nvme_version": "1.4" 00:25:13.403 }, 00:25:13.403 "ns_data": { 00:25:13.403 "id": 1, 00:25:13.403 "can_share": false 00:25:13.403 } 00:25:13.403 } 00:25:13.403 ], 00:25:13.403 "mp_policy": "active_passive" 00:25:13.403 } 00:25:13.403 } 00:25:13.403 ]' 00:25:13.403 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:13.403 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:13.403 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@28 -- # stores=756fbd24-a8fa-4c9e-9f77-40552f7ca0f4 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:25:13.663 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 756fbd24-a8fa-4c9e-9f77-40552f7ca0f4 00:25:13.924 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:25:14.184 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@68 -- # lvs=f4c8214f-0e78-43ad-9028-4c88d99cdace 00:25:14.184 02:20:38 ftl.ftl_dirty_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u f4c8214f-0e78-43ad-9028-4c88d99cdace 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@49 -- # split_bdev=11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@51 -- # '[' -n 0000:00:10.0 ']' 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # create_nv_cache_bdev nvc0 0000:00:10.0 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@35 -- # local name=nvc0 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@36 -- # local 
cache_bdf=0000:00:10.0 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@37 -- # local base_bdev=11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@38 -- # local cache_size= 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # get_bdev_size 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:14.445 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:14.706 { 00:25:14.706 "name": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:14.706 "aliases": [ 00:25:14.706 "lvs/nvme0n1p0" 00:25:14.706 ], 00:25:14.706 "product_name": "Logical Volume", 00:25:14.706 "block_size": 4096, 00:25:14.706 "num_blocks": 26476544, 00:25:14.706 "uuid": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:14.706 "assigned_rate_limits": { 00:25:14.706 "rw_ios_per_sec": 0, 00:25:14.706 "rw_mbytes_per_sec": 0, 00:25:14.706 "r_mbytes_per_sec": 0, 00:25:14.706 "w_mbytes_per_sec": 0 00:25:14.706 }, 00:25:14.706 "claimed": false, 00:25:14.706 "zoned": false, 00:25:14.706 "supported_io_types": { 00:25:14.706 "read": true, 00:25:14.706 "write": true, 00:25:14.706 "unmap": true, 00:25:14.706 "flush": false, 00:25:14.706 "reset": true, 00:25:14.706 "nvme_admin": false, 00:25:14.706 "nvme_io": false, 00:25:14.706 "nvme_io_md": false, 00:25:14.706 "write_zeroes": true, 00:25:14.706 "zcopy": false, 00:25:14.706 "get_zone_info": false, 00:25:14.706 "zone_management": false, 00:25:14.706 "zone_append": false, 00:25:14.706 "compare": false, 00:25:14.706 "compare_and_write": false, 00:25:14.706 "abort": false, 00:25:14.706 "seek_hole": true, 00:25:14.706 "seek_data": true, 00:25:14.706 "copy": false, 00:25:14.706 "nvme_iov_md": false 00:25:14.706 }, 00:25:14.706 "driver_specific": { 00:25:14.706 "lvol": { 00:25:14.706 "lvol_store_uuid": "f4c8214f-0e78-43ad-9028-4c88d99cdace", 00:25:14.706 "base_bdev": "nvme0n1", 00:25:14.706 "thin_provision": true, 00:25:14.706 "num_allocated_clusters": 0, 00:25:14.706 "snapshot": false, 00:25:14.706 "clone": false, 00:25:14.706 "esnap_clone": false 00:25:14.706 } 00:25:14.706 } 00:25:14.706 } 00:25:14.706 ]' 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@41 -- # local base_size=5171 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:25:14.706 02:20:39 ftl.ftl_dirty_shutdown -- 
ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@47 -- # [[ -z '' ]] 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # get_bdev_size 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:14.965 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:15.224 { 00:25:15.224 "name": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:15.224 "aliases": [ 00:25:15.224 "lvs/nvme0n1p0" 00:25:15.224 ], 00:25:15.224 "product_name": "Logical Volume", 00:25:15.224 "block_size": 4096, 00:25:15.224 "num_blocks": 26476544, 00:25:15.224 "uuid": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:15.224 "assigned_rate_limits": { 00:25:15.224 "rw_ios_per_sec": 0, 00:25:15.224 "rw_mbytes_per_sec": 0, 00:25:15.224 "r_mbytes_per_sec": 0, 00:25:15.224 "w_mbytes_per_sec": 0 00:25:15.224 }, 00:25:15.224 "claimed": false, 00:25:15.224 "zoned": false, 00:25:15.224 "supported_io_types": { 00:25:15.224 "read": true, 00:25:15.224 "write": true, 00:25:15.224 "unmap": true, 00:25:15.224 "flush": false, 00:25:15.224 "reset": true, 00:25:15.224 "nvme_admin": false, 00:25:15.224 "nvme_io": false, 00:25:15.224 "nvme_io_md": false, 00:25:15.224 "write_zeroes": true, 00:25:15.224 "zcopy": false, 00:25:15.224 "get_zone_info": false, 00:25:15.224 "zone_management": false, 00:25:15.224 "zone_append": false, 00:25:15.224 "compare": false, 00:25:15.224 "compare_and_write": false, 00:25:15.224 "abort": false, 00:25:15.224 "seek_hole": true, 00:25:15.224 "seek_data": true, 00:25:15.224 "copy": false, 00:25:15.224 "nvme_iov_md": false 00:25:15.224 }, 00:25:15.224 "driver_specific": { 00:25:15.224 "lvol": { 00:25:15.224 "lvol_store_uuid": "f4c8214f-0e78-43ad-9028-4c88d99cdace", 00:25:15.224 "base_bdev": "nvme0n1", 00:25:15.224 "thin_provision": true, 00:25:15.224 "num_allocated_clusters": 0, 00:25:15.224 "snapshot": false, 00:25:15.224 "clone": false, 00:25:15.224 "esnap_clone": false 00:25:15.224 } 00:25:15.224 } 00:25:15.224 } 00:25:15.224 ]' 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:15.224 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:15.225 02:20:39 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:15.225 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@48 -- # cache_size=5171 00:25:15.225 02:20:39 ftl.ftl_dirty_shutdown -- ftl/common.sh@50 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@52 -- # nvc_bdev=nvc0n1p0 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # get_bdev_size 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:25:15.483 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:25:15.742 { 00:25:15.742 "name": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:15.742 "aliases": [ 00:25:15.742 "lvs/nvme0n1p0" 00:25:15.742 ], 00:25:15.742 "product_name": "Logical Volume", 00:25:15.742 "block_size": 4096, 00:25:15.742 "num_blocks": 26476544, 00:25:15.742 "uuid": "11e2a9c4-999e-4b5b-8bf5-b572a8d01e55", 00:25:15.742 "assigned_rate_limits": { 00:25:15.742 "rw_ios_per_sec": 0, 00:25:15.742 "rw_mbytes_per_sec": 0, 00:25:15.742 "r_mbytes_per_sec": 0, 00:25:15.742 "w_mbytes_per_sec": 0 00:25:15.742 }, 00:25:15.742 "claimed": false, 00:25:15.742 "zoned": false, 00:25:15.742 "supported_io_types": { 00:25:15.742 "read": true, 00:25:15.742 "write": true, 00:25:15.742 "unmap": true, 00:25:15.742 "flush": false, 00:25:15.742 "reset": true, 00:25:15.742 "nvme_admin": false, 00:25:15.742 "nvme_io": false, 00:25:15.742 "nvme_io_md": false, 00:25:15.742 "write_zeroes": true, 00:25:15.742 "zcopy": false, 00:25:15.742 "get_zone_info": false, 00:25:15.742 "zone_management": false, 00:25:15.742 "zone_append": false, 00:25:15.742 "compare": false, 00:25:15.742 "compare_and_write": false, 00:25:15.742 "abort": false, 00:25:15.742 "seek_hole": true, 00:25:15.742 "seek_data": true, 00:25:15.742 "copy": false, 00:25:15.742 "nvme_iov_md": false 00:25:15.742 }, 00:25:15.742 "driver_specific": { 00:25:15.742 "lvol": { 00:25:15.742 "lvol_store_uuid": "f4c8214f-0e78-43ad-9028-4c88d99cdace", 00:25:15.742 "base_bdev": "nvme0n1", 00:25:15.742 "thin_provision": true, 00:25:15.742 "num_allocated_clusters": 0, 00:25:15.742 "snapshot": false, 00:25:15.742 "clone": false, 00:25:15.742 "esnap_clone": false 00:25:15.742 } 00:25:15.742 } 00:25:15.742 } 00:25:15.742 ]' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1388 -- # nb=26476544 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1392 -- # echo 103424 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@55 -- # l2p_dram_size_mb=10 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@56 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 
--l2p_dram_limit 10' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@58 -- # '[' -n '' ']' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # '[' -n 0000:00:10.0 ']' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@59 -- # ftl_construct_args+=' -c nvc0n1p0' 00:25:15.742 02:20:40 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@61 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d 11e2a9c4-999e-4b5b-8bf5-b572a8d01e55 --l2p_dram_limit 10 -c nvc0n1p0 00:25:16.003 [2024-12-15 02:20:40.597498] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.597531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:25:16.003 [2024-12-15 02:20:40.597543] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:16.003 [2024-12-15 02:20:40.597550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.597595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.597603] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:25:16.003 [2024-12-15 02:20:40.597611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms 00:25:16.003 [2024-12-15 02:20:40.597616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.597636] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:25:16.003 [2024-12-15 02:20:40.598183] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:25:16.003 [2024-12-15 02:20:40.598217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.598224] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:25:16.003 [2024-12-15 02:20:40.598232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.586 ms 00:25:16.003 [2024-12-15 02:20:40.598238] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.598300] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 0c183996-2205-4a0b-bdc0-38705690ad6f 00:25:16.003 [2024-12-15 02:20:40.599220] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.599249] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:25:16.003 [2024-12-15 02:20:40.599256] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:25:16.003 [2024-12-15 02:20:40.599263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.603952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.603981] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:25:16.003 [2024-12-15 02:20:40.603989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.656 ms 00:25:16.003 [2024-12-15 02:20:40.603998] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.604061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.604069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:25:16.003 [2024-12-15 02:20:40.604076] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.049 ms 00:25:16.003 [2024-12-15 02:20:40.604085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.604124] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.604133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:25:16.003 [2024-12-15 02:20:40.604139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:25:16.003 [2024-12-15 02:20:40.604148] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.604164] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:25:16.003 [2024-12-15 02:20:40.607013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.003 [2024-12-15 02:20:40.607037] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:25:16.003 [2024-12-15 02:20:40.607046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.851 ms 00:25:16.003 [2024-12-15 02:20:40.607052] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.003 [2024-12-15 02:20:40.607080] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.004 [2024-12-15 02:20:40.607086] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:25:16.004 [2024-12-15 02:20:40.607094] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:25:16.004 [2024-12-15 02:20:40.607100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.004 [2024-12-15 02:20:40.607119] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:25:16.004 [2024-12-15 02:20:40.607236] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:25:16.004 [2024-12-15 02:20:40.607252] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:25:16.004 [2024-12-15 02:20:40.607261] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:25:16.004 [2024-12-15 02:20:40.607271] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607277] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607284] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:25:16.004 [2024-12-15 02:20:40.607290] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:25:16.004 [2024-12-15 02:20:40.607300] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:25:16.004 [2024-12-15 02:20:40.607305] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:25:16.004 [2024-12-15 02:20:40.607312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.004 [2024-12-15 02:20:40.607323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:25:16.004 [2024-12-15 02:20:40.607331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.195 ms 00:25:16.004 [2024-12-15 02:20:40.607337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.004 [2024-12-15 02:20:40.607404] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.004 [2024-12-15 02:20:40.607416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:25:16.004 [2024-12-15 02:20:40.607423] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.054 ms 00:25:16.004 [2024-12-15 02:20:40.607429] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.004 [2024-12-15 02:20:40.607505] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:25:16.004 [2024-12-15 02:20:40.607513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:25:16.004 [2024-12-15 02:20:40.607520] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:25:16.004 [2024-12-15 02:20:40.607538] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607544] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607550] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:25:16.004 [2024-12-15 02:20:40.607556] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607561] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.004 [2024-12-15 02:20:40.607568] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:25:16.004 [2024-12-15 02:20:40.607573] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:25:16.004 [2024-12-15 02:20:40.607581] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:25:16.004 [2024-12-15 02:20:40.607586] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:25:16.004 [2024-12-15 02:20:40.607592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:25:16.004 [2024-12-15 02:20:40.607597] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607605] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:25:16.004 [2024-12-15 02:20:40.607610] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607616] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:25:16.004 [2024-12-15 02:20:40.607627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607632] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:25:16.004 [2024-12-15 02:20:40.607643] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607655] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:25:16.004 [2024-12-15 02:20:40.607661] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607666] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607672] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:25:16.004 [2024-12-15 02:20:40.607677] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:25:16.004 [2024-12-15 02:20:40.607695] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607700] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.004 [2024-12-15 02:20:40.607706] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:25:16.004 [2024-12-15 02:20:40.607711] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:25:16.004 [2024-12-15 02:20:40.607719] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:25:16.004 [2024-12-15 02:20:40.607724] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:25:16.004 [2024-12-15 02:20:40.607731] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:25:16.004 [2024-12-15 02:20:40.607736] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607742] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:25:16.004 [2024-12-15 02:20:40.607747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:25:16.004 [2024-12-15 02:20:40.607753] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607758] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:25:16.004 [2024-12-15 02:20:40.607765] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:25:16.004 [2024-12-15 02:20:40.607770] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607777] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:25:16.004 [2024-12-15 02:20:40.607783] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:25:16.004 [2024-12-15 02:20:40.607790] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:25:16.004 [2024-12-15 02:20:40.607796] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:25:16.004 [2024-12-15 02:20:40.607802] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:25:16.004 [2024-12-15 02:20:40.607808] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:25:16.004 [2024-12-15 02:20:40.607815] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:25:16.004 [2024-12-15 02:20:40.607821] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:25:16.004 [2024-12-15 02:20:40.607829] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.004 [2024-12-15 02:20:40.607837] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:25:16.004 [2024-12-15 02:20:40.607844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:25:16.004 [2024-12-15 02:20:40.607849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: 
*NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:25:16.004 [2024-12-15 02:20:40.607856] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:25:16.004 [2024-12-15 02:20:40.607861] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:25:16.004 [2024-12-15 02:20:40.607868] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:25:16.004 [2024-12-15 02:20:40.607873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:25:16.004 [2024-12-15 02:20:40.607879] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:25:16.004 [2024-12-15 02:20:40.607885] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:25:16.004 [2024-12-15 02:20:40.607893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:25:16.004 [2024-12-15 02:20:40.607899] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:25:16.004 [2024-12-15 02:20:40.607905] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:25:16.004 [2024-12-15 02:20:40.607910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:25:16.005 [2024-12-15 02:20:40.607918] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:25:16.005 [2024-12-15 02:20:40.607923] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:25:16.005 [2024-12-15 02:20:40.607931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:25:16.005 [2024-12-15 02:20:40.607937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:25:16.005 [2024-12-15 02:20:40.607945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:25:16.005 [2024-12-15 02:20:40.607950] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:25:16.005 [2024-12-15 02:20:40.607957] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:25:16.005 [2024-12-15 02:20:40.607963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:16.005 [2024-12-15 02:20:40.607970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:25:16.005 [2024-12-15 02:20:40.607975] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.511 ms 00:25:16.005 [2024-12-15 02:20:40.607982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:16.005 [2024-12-15 02:20:40.608009] mngt/ftl_mngt_misc.c: 
165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:25:16.005 [2024-12-15 02:20:40.608020] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:25:20.280 [2024-12-15 02:20:44.438963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.439070] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:25:20.280 [2024-12-15 02:20:44.439088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3830.937 ms 00:25:20.280 [2024-12-15 02:20:44.439100] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.471517] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.471589] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:25:20.280 [2024-12-15 02:20:44.471603] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.137 ms 00:25:20.280 [2024-12-15 02:20:44.471615] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.471765] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.471780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:25:20.280 [2024-12-15 02:20:44.471790] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.072 ms 00:25:20.280 [2024-12-15 02:20:44.471807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.507462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.507523] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:25:20.280 [2024-12-15 02:20:44.507536] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.602 ms 00:25:20.280 [2024-12-15 02:20:44.507547] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.507585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.507601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:25:20.280 [2024-12-15 02:20:44.507611] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:25:20.280 [2024-12-15 02:20:44.507630] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.508255] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.508295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:25:20.280 [2024-12-15 02:20:44.508306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.569 ms 00:25:20.280 [2024-12-15 02:20:44.508316] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.280 [2024-12-15 02:20:44.508432] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.280 [2024-12-15 02:20:44.508444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:25:20.280 [2024-12-15 02:20:44.508457] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.091 ms 00:25:20.280 [2024-12-15 02:20:44.508472] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.526428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.526484] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:25:20.281 [2024-12-15 02:20:44.526497] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.937 ms 00:25:20.281 [2024-12-15 02:20:44.526508] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.547704] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:25:20.281 [2024-12-15 02:20:44.551643] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.551692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:25:20.281 [2024-12-15 02:20:44.551708] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.035 ms 00:25:20.281 [2024-12-15 02:20:44.551717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.652337] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.652402] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:25:20.281 [2024-12-15 02:20:44.652420] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 100.565 ms 00:25:20.281 [2024-12-15 02:20:44.652430] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.652653] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.652670] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:25:20.281 [2024-12-15 02:20:44.652685] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.162 ms 00:25:20.281 [2024-12-15 02:20:44.652693] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.679649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.679704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial band info metadata 00:25:20.281 [2024-12-15 02:20:44.679721] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.893 ms 00:25:20.281 [2024-12-15 02:20:44.679730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.705880] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.705930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:25:20.281 [2024-12-15 02:20:44.705946] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.083 ms 00:25:20.281 [2024-12-15 02:20:44.705953] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.706606] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.706637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:25:20.281 [2024-12-15 02:20:44.706649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.599 ms 00:25:20.281 [2024-12-15 02:20:44.706659] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.793541] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.793597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:25:20.281 [2024-12-15 02:20:44.793618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 86.813 ms 00:25:20.281 [2024-12-15 02:20:44.793627] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.822098] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.822154] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:25:20.281 [2024-12-15 02:20:44.822171] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.348 ms 00:25:20.281 [2024-12-15 02:20:44.822180] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.848956] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.849009] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:25:20.281 [2024-12-15 02:20:44.849025] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.702 ms 00:25:20.281 [2024-12-15 02:20:44.849033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.876364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.876419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:25:20.281 [2024-12-15 02:20:44.876434] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.269 ms 00:25:20.281 [2024-12-15 02:20:44.876442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.876502] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.876511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:25:20.281 [2024-12-15 02:20:44.876527] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:25:20.281 [2024-12-15 02:20:44.876535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.876651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:25:20.281 [2024-12-15 02:20:44.876666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:25:20.281 [2024-12-15 02:20:44.876678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:25:20.281 [2024-12-15 02:20:44.876686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:25:20.281 [2024-12-15 02:20:44.878023] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4280.002 ms, result 0 00:25:20.281 { 00:25:20.281 "name": "ftl0", 00:25:20.281 "uuid": "0c183996-2205-4a0b-bdc0-38705690ad6f" 00:25:20.281 } 00:25:20.281 02:20:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@64 -- # echo '{"subsystems": [' 00:25:20.281 02:20:44 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev 00:25:20.541 02:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@66 -- # echo ']}' 00:25:20.541 02:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@70 -- # modprobe nbd 00:25:20.541 02:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_start_disk ftl0 /dev/nbd0 00:25:20.802 /dev/nbd0 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@72 -- # waitfornbd nbd0 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@872 -- # local nbd_name=nbd0 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@873 -- # local i 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- 
common/autotest_common.sh@875 -- # (( i = 1 )) 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@875 -- # (( i <= 20 )) 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@876 -- # grep -q -w nbd0 /proc/partitions 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@877 -- # break 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i = 1 )) 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@888 -- # (( i <= 20 )) 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@889 -- # dd if=/dev/nbd0 of=/home/vagrant/spdk_repo/spdk/test/ftl/nbdtest bs=4096 count=1 iflag=direct 00:25:20.802 1+0 records in 00:25:20.802 1+0 records out 00:25:20.802 4096 bytes (4.1 kB, 4.0 KiB) copied, 0.000988972 s, 4.1 MB/s 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # stat -c %s /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@890 -- # size=4096 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@891 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/nbdtest 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@892 -- # '[' 4096 '!=' 0 ']' 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@893 -- # return 0 00:25:20.802 02:20:45 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@75 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --bs=4096 --count=262144 00:25:20.802 [2024-12-15 02:20:45.456638] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:25:20.802 [2024-12-15 02:20:45.456793] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82082 ] 00:25:21.063 [2024-12-15 02:20:45.621825] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:21.063 [2024-12-15 02:20:45.744462] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:22.453  [2024-12-15T02:20:48.154Z] Copying: 189/1024 [MB] (189 MBps) [2024-12-15T02:20:49.090Z] Copying: 418/1024 [MB] (229 MBps) [2024-12-15T02:20:50.025Z] Copying: 676/1024 [MB] (257 MBps) [2024-12-15T02:20:50.591Z] Copying: 927/1024 [MB] (251 MBps) [2024-12-15T02:20:51.157Z] Copying: 1024/1024 [MB] (average 233 MBps) 00:25:26.392 00:25:26.392 02:20:50 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@76 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:25:28.293 02:20:52 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@77 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd -m 0x2 --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --of=/dev/nbd0 --bs=4096 --count=262144 --oflag=direct 00:25:28.293 [2024-12-15 02:20:52.655942] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:25:28.293 [2024-12-15 02:20:52.656073] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x2 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82158 ] 00:25:28.293 [2024-12-15 02:20:52.812147] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:25:28.293 [2024-12-15 02:20:52.885141] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:25:29.667  [2024-12-15T02:20:55.365Z] Copying: 18/1024 [MB] (18 MBps) [2024-12-15T02:20:56.298Z] Copying: 43/1024 [MB] (25 MBps) [2024-12-15T02:20:57.232Z] Copying: 62/1024 [MB] (18 MBps) [2024-12-15T02:20:58.165Z] Copying: 93/1024 [MB] (31 MBps) [2024-12-15T02:20:59.100Z] Copying: 128/1024 [MB] (35 MBps) [2024-12-15T02:21:00.484Z] Copying: 164/1024 [MB] (35 MBps) [2024-12-15T02:21:01.420Z] Copying: 178108/1048576 [kB] (10092 kBps) [2024-12-15T02:21:02.353Z] Copying: 196/1024 [MB] (22 MBps) [2024-12-15T02:21:03.285Z] Copying: 230/1024 [MB] (33 MBps) [2024-12-15T02:21:04.217Z] Copying: 263/1024 [MB] (33 MBps) [2024-12-15T02:21:05.153Z] Copying: 299/1024 [MB] (36 MBps) [2024-12-15T02:21:06.100Z] Copying: 333/1024 [MB] (33 MBps) [2024-12-15T02:21:07.070Z] Copying: 350/1024 [MB] (16 MBps) [2024-12-15T02:21:08.444Z] Copying: 379/1024 [MB] (29 MBps) [2024-12-15T02:21:09.380Z] Copying: 414/1024 [MB] (34 MBps) [2024-12-15T02:21:10.321Z] Copying: 449/1024 [MB] (34 MBps) [2024-12-15T02:21:11.256Z] Copying: 469/1024 [MB] (20 MBps) [2024-12-15T02:21:12.190Z] Copying: 499/1024 [MB] (29 MBps) [2024-12-15T02:21:13.122Z] Copying: 528/1024 [MB] (29 MBps) [2024-12-15T02:21:14.495Z] Copying: 562/1024 [MB] (33 MBps) [2024-12-15T02:21:15.061Z] Copying: 597/1024 [MB] (35 MBps) [2024-12-15T02:21:16.438Z] Copying: 631/1024 [MB] (33 MBps) [2024-12-15T02:21:17.370Z] Copying: 654/1024 [MB] (22 MBps) [2024-12-15T02:21:18.304Z] Copying: 679/1024 [MB] (25 MBps) [2024-12-15T02:21:19.238Z] Copying: 715/1024 [MB] (35 MBps) [2024-12-15T02:21:20.170Z] Copying: 750/1024 [MB] (35 MBps) [2024-12-15T02:21:21.103Z] Copying: 786/1024 [MB] (35 MBps) [2024-12-15T02:21:22.476Z] Copying: 821/1024 [MB] (34 MBps) [2024-12-15T02:21:23.410Z] Copying: 852/1024 [MB] (30 MBps) [2024-12-15T02:21:24.343Z] Copying: 882/1024 [MB] (30 MBps) [2024-12-15T02:21:25.280Z] Copying: 917/1024 [MB] (34 MBps) [2024-12-15T02:21:26.212Z] Copying: 952/1024 [MB] (35 MBps) [2024-12-15T02:21:27.145Z] Copying: 986/1024 [MB] (33 MBps) [2024-12-15T02:21:27.404Z] Copying: 1016/1024 [MB] (30 MBps) [2024-12-15T02:21:27.972Z] Copying: 1024/1024 [MB] (average 29 MBps) 00:26:03.207 00:26:03.207 02:21:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@78 -- # sync /dev/nbd0 00:26:03.207 02:21:27 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nbd_stop_disk /dev/nbd0 00:26:03.468 02:21:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@80 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0 00:26:03.730 [2024-12-15 02:21:28.253304] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.253345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:26:03.730 [2024-12-15 02:21:28.253357] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:03.730 [2024-12-15 02:21:28.253366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.730 [2024-12-15 02:21:28.253388] 
mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:26:03.730 [2024-12-15 02:21:28.255645] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.255673] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:26:03.730 [2024-12-15 02:21:28.255683] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.242 ms 00:26:03.730 [2024-12-15 02:21:28.255690] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.730 [2024-12-15 02:21:28.258323] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.258351] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:26:03.730 [2024-12-15 02:21:28.258360] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.607 ms 00:26:03.730 [2024-12-15 02:21:28.258366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.730 [2024-12-15 02:21:28.273807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.273837] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:26:03.730 [2024-12-15 02:21:28.273847] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.424 ms 00:26:03.730 [2024-12-15 02:21:28.273854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.730 [2024-12-15 02:21:28.278461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.278483] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:26:03.730 [2024-12-15 02:21:28.278492] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.577 ms 00:26:03.730 [2024-12-15 02:21:28.278499] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.730 [2024-12-15 02:21:28.298364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.730 [2024-12-15 02:21:28.298393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:26:03.730 [2024-12-15 02:21:28.298405] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.809 ms 00:26:03.730 [2024-12-15 02:21:28.298411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.311217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.311245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:26:03.731 [2024-12-15 02:21:28.311259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.772 ms 00:26:03.731 [2024-12-15 02:21:28.311266] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.311391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.311401] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:26:03.731 [2024-12-15 02:21:28.311410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.086 ms 00:26:03.731 [2024-12-15 02:21:28.311417] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.330170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.330202] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:26:03.731 [2024-12-15 02:21:28.330212] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.738 ms 00:26:03.731 [2024-12-15 02:21:28.330217] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.348563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.348588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:26:03.731 [2024-12-15 02:21:28.348599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.316 ms 00:26:03.731 [2024-12-15 02:21:28.348604] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.366148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.366174] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:26:03.731 [2024-12-15 02:21:28.366183] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.514 ms 00:26:03.731 [2024-12-15 02:21:28.366189] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.384054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.731 [2024-12-15 02:21:28.384078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:26:03.731 [2024-12-15 02:21:28.384087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.798 ms 00:26:03.731 [2024-12-15 02:21:28.384093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.731 [2024-12-15 02:21:28.384122] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:26:03.731 [2024-12-15 02:21:28.384134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384143] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384150] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384176] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384185] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384191] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384206] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384212] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384219] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384225] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384232] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384238] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384262] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384269] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384278] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384309] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384322] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384329] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384345] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384364] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384389] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384396] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 
02:21:28.384402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384435] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384460] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384467] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384497] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:26:03.731 [2024-12-15 02:21:28.384504] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384510] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384524] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384536] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384543] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384564] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 
00:26:03.732 [2024-12-15 02:21:28.384571] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384622] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384638] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384657] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384685] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384693] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384699] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384708] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384713] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384726] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384733] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384748] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 
wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384760] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384783] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384816] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384822] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384830] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:26:03.732 [2024-12-15 02:21:28.384842] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:26:03.732 [2024-12-15 02:21:28.384849] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c183996-2205-4a0b-bdc0-38705690ad6f 00:26:03.732 [2024-12-15 02:21:28.384855] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:26:03.732 [2024-12-15 02:21:28.384864] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:26:03.732 [2024-12-15 02:21:28.384869] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:26:03.732 [2024-12-15 02:21:28.384878] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:26:03.732 [2024-12-15 02:21:28.384884] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:26:03.732 [2024-12-15 02:21:28.384892] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:26:03.732 [2024-12-15 02:21:28.384898] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:26:03.732 [2024-12-15 02:21:28.384903] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:26:03.732 [2024-12-15 02:21:28.384908] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:26:03.732 [2024-12-15 02:21:28.384915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.732 [2024-12-15 02:21:28.384921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:26:03.732 [2024-12-15 02:21:28.384928] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.795 ms 00:26:03.732 [2024-12-15 02:21:28.384934] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.394921] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:26:03.732 [2024-12-15 02:21:28.394946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:26:03.732 [2024-12-15 02:21:28.394956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.962 ms 00:26:03.732 [2024-12-15 02:21:28.394963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.395265] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:03.732 [2024-12-15 02:21:28.395274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:26:03.732 [2024-12-15 02:21:28.395282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.284 ms 00:26:03.732 [2024-12-15 02:21:28.395287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.430038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.732 [2024-12-15 02:21:28.430073] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:03.732 [2024-12-15 02:21:28.430082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.732 [2024-12-15 02:21:28.430089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.430140] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.732 [2024-12-15 02:21:28.430147] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:03.732 [2024-12-15 02:21:28.430155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.732 [2024-12-15 02:21:28.430161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.430233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.732 [2024-12-15 02:21:28.430245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:03.732 [2024-12-15 02:21:28.430253] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.732 [2024-12-15 02:21:28.430259] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.732 [2024-12-15 02:21:28.430275] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.732 [2024-12-15 02:21:28.430283] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:03.733 [2024-12-15 02:21:28.430290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.733 [2024-12-15 02:21:28.430296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.993 [2024-12-15 02:21:28.493340] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.993 [2024-12-15 02:21:28.493374] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:03.993 [2024-12-15 02:21:28.493384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.993 [2024-12-15 02:21:28.493391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.544535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544569] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:03.994 [2024-12-15 02:21:28.544580] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544587] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 
02:21:28.544663] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544671] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:03.994 [2024-12-15 02:21:28.544682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544688] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.544745] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544754] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:03.994 [2024-12-15 02:21:28.544762] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544769] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.544844] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544852] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:03.994 [2024-12-15 02:21:28.544861] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.544896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544903] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:26:03.994 [2024-12-15 02:21:28.544911] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544918] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.544953] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.544960] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:03.994 [2024-12-15 02:21:28.544968] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.544976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.545019] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:26:03.994 [2024-12-15 02:21:28.545027] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:03.994 [2024-12-15 02:21:28.545035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:26:03.994 [2024-12-15 02:21:28.545042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:03.994 [2024-12-15 02:21:28.545165] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 291.821 ms, result 0 00:26:03.994 true 00:26:03.994 02:21:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@83 -- # kill -9 81939 00:26:03.994 02:21:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@84 -- # rm -f /dev/shm/spdk_tgt_trace.pid81939 00:26:03.994 02:21:28 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/dev/urandom --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --bs=4096 --count=262144 00:26:03.994 [2024-12-15 02:21:28.633188] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:26:03.994 [2024-12-15 02:21:28.633323] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82535 ] 00:26:04.254 [2024-12-15 02:21:28.788157] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:04.254 [2024-12-15 02:21:28.880323] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:05.719  [2024-12-15T02:21:31.427Z] Copying: 255/1024 [MB] (255 MBps) [2024-12-15T02:21:32.368Z] Copying: 514/1024 [MB] (258 MBps) [2024-12-15T02:21:33.310Z] Copying: 771/1024 [MB] (257 MBps) [2024-12-15T02:21:33.880Z] Copying: 1024/1024 [MB] (average 256 MBps) 00:26:09.115 00:26:09.115 /home/vagrant/spdk_repo/spdk/test/ftl/dirty_shutdown.sh: line 87: 81939 Killed "$SPDK_BIN_DIR/spdk_tgt" -m 0x1 00:26:09.115 02:21:33 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@88 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --ob=ftl0 --count=262144 --seek=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:26:09.115 [2024-12-15 02:21:33.732693] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:26:09.115 [2024-12-15 02:21:33.732785] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid82593 ] 00:26:09.376 [2024-12-15 02:21:33.885663] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:26:09.376 [2024-12-15 02:21:34.006415] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:26:09.638 [2024-12-15 02:21:34.342285] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.638 [2024-12-15 02:21:34.342372] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:26:09.899 [2024-12-15 02:21:34.408835] blobstore.c:4899:bs_recover: *NOTICE*: Performing recovery on blobstore 00:26:09.899 [2024-12-15 02:21:34.409462] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x0 00:26:09.899 [2024-12-15 02:21:34.409973] blobstore.c:4846:bs_load_replay_md_cpl: *NOTICE*: Recover: blob 0x1 00:26:10.161 [2024-12-15 02:21:34.847048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.847079] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:26:10.161 [2024-12-15 02:21:34.847091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.161 [2024-12-15 02:21:34.847099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.847138] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.847146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:26:10.161 [2024-12-15 02:21:34.847152] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:26:10.161 [2024-12-15 02:21:34.847158] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.847171] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:26:10.161 [2024-12-15 02:21:34.847699] mngt/ftl_mngt_bdev.c: 
236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:26:10.161 [2024-12-15 02:21:34.847713] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.847719] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:26:10.161 [2024-12-15 02:21:34.847726] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.545 ms 00:26:10.161 [2024-12-15 02:21:34.847731] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.848970] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:26:10.161 [2024-12-15 02:21:34.859530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.859664] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:26:10.161 [2024-12-15 02:21:34.859678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.562 ms 00:26:10.161 [2024-12-15 02:21:34.859685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.859727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.859736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:26:10.161 [2024-12-15 02:21:34.859742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:26:10.161 [2024-12-15 02:21:34.859748] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.865904] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.865930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:26:10.161 [2024-12-15 02:21:34.865943] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.112 ms 00:26:10.161 [2024-12-15 02:21:34.865950] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.866006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.866013] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:26:10.161 [2024-12-15 02:21:34.866019] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:26:10.161 [2024-12-15 02:21:34.866025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.866061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.866068] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:26:10.161 [2024-12-15 02:21:34.866075] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:26:10.161 [2024-12-15 02:21:34.866080] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.866095] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:26:10.161 [2024-12-15 02:21:34.869009] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.869031] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:26:10.161 [2024-12-15 02:21:34.869038] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.918 ms 00:26:10.161 [2024-12-15 02:21:34.869044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 
02:21:34.869074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.161 [2024-12-15 02:21:34.869081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:26:10.161 [2024-12-15 02:21:34.869087] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:26:10.161 [2024-12-15 02:21:34.869093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.161 [2024-12-15 02:21:34.869110] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:26:10.161 [2024-12-15 02:21:34.869128] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:26:10.161 [2024-12-15 02:21:34.869158] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:26:10.161 [2024-12-15 02:21:34.869170] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:26:10.162 [2024-12-15 02:21:34.869264] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:26:10.162 [2024-12-15 02:21:34.869274] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:26:10.162 [2024-12-15 02:21:34.869284] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:26:10.162 [2024-12-15 02:21:34.869295] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869302] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869309] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:26:10.162 [2024-12-15 02:21:34.869316] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:26:10.162 [2024-12-15 02:21:34.869321] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:26:10.162 [2024-12-15 02:21:34.869327] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:26:10.162 [2024-12-15 02:21:34.869334] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.162 [2024-12-15 02:21:34.869340] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:26:10.162 [2024-12-15 02:21:34.869346] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:26:10.162 [2024-12-15 02:21:34.869352] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.162 [2024-12-15 02:21:34.869414] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.162 [2024-12-15 02:21:34.869424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:26:10.162 [2024-12-15 02:21:34.869430] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:26:10.162 [2024-12-15 02:21:34.869435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.162 [2024-12-15 02:21:34.869509] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:26:10.162 [2024-12-15 02:21:34.869517] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:26:10.162 [2024-12-15 02:21:34.869523] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869529] ftl_layout.c: 133:dump_region: *NOTICE*: 
[FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869535] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:26:10.162 [2024-12-15 02:21:34.869542] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869548] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:26:10.162 [2024-12-15 02:21:34.869563] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869573] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.162 [2024-12-15 02:21:34.869578] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:26:10.162 [2024-12-15 02:21:34.869583] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:26:10.162 [2024-12-15 02:21:34.869588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:26:10.162 [2024-12-15 02:21:34.869594] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:26:10.162 [2024-12-15 02:21:34.869599] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:26:10.162 [2024-12-15 02:21:34.869604] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869609] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:26:10.162 [2024-12-15 02:21:34.869615] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869621] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869626] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:26:10.162 [2024-12-15 02:21:34.869632] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869637] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869642] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:26:10.162 [2024-12-15 02:21:34.869649] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869654] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:26:10.162 [2024-12-15 02:21:34.869664] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869669] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869673] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:26:10.162 [2024-12-15 02:21:34.869678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869684] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:26:10.162 [2024-12-15 02:21:34.869693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.162 [2024-12-15 02:21:34.869702] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:26:10.162 [2024-12-15 02:21:34.869707] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:26:10.162 [2024-12-15 02:21:34.869712] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:26:10.162 [2024-12-15 02:21:34.869717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:26:10.162 [2024-12-15 02:21:34.869723] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:26:10.162 [2024-12-15 02:21:34.869728] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869735] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:26:10.162 [2024-12-15 02:21:34.869740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:26:10.162 [2024-12-15 02:21:34.869745] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869750] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:26:10.162 [2024-12-15 02:21:34.869764] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:26:10.162 [2024-12-15 02:21:34.869772] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869778] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:26:10.162 [2024-12-15 02:21:34.869784] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:26:10.162 [2024-12-15 02:21:34.869789] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:26:10.162 [2024-12-15 02:21:34.869794] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:26:10.162 [2024-12-15 02:21:34.869800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:26:10.162 [2024-12-15 02:21:34.869805] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:26:10.162 [2024-12-15 02:21:34.869809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:26:10.162 [2024-12-15 02:21:34.869816] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:26:10.162 [2024-12-15 02:21:34.869823] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.162 [2024-12-15 02:21:34.869830] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:26:10.162 [2024-12-15 02:21:34.869835] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:26:10.162 [2024-12-15 02:21:34.869841] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:26:10.162 [2024-12-15 02:21:34.869847] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:26:10.162 [2024-12-15 02:21:34.869854] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:26:10.162 [2024-12-15 02:21:34.869860] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:26:10.162 [2024-12-15 02:21:34.869866] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:26:10.162 [2024-12-15 
02:21:34.869872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:26:10.162 [2024-12-15 02:21:34.869877] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:26:10.162 [2024-12-15 02:21:34.869882] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:26:10.162 [2024-12-15 02:21:34.869888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:26:10.162 [2024-12-15 02:21:34.869893] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:26:10.162 [2024-12-15 02:21:34.869898] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:26:10.162 [2024-12-15 02:21:34.869904] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:26:10.162 [2024-12-15 02:21:34.869911] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:26:10.163 [2024-12-15 02:21:34.869917] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:26:10.163 [2024-12-15 02:21:34.869923] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:26:10.163 [2024-12-15 02:21:34.869931] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:26:10.163 [2024-12-15 02:21:34.869937] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:26:10.163 [2024-12-15 02:21:34.869942] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:26:10.163 [2024-12-15 02:21:34.869948] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.163 [2024-12-15 02:21:34.869954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:26:10.163 [2024-12-15 02:21:34.869960] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.493 ms 00:26:10.163 [2024-12-15 02:21:34.869966] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.163 [2024-12-15 02:21:34.894086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.163 [2024-12-15 02:21:34.894232] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:26:10.163 [2024-12-15 02:21:34.894246] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.072 ms 00:26:10.163 [2024-12-15 02:21:34.894252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.163 [2024-12-15 02:21:34.894322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.163 [2024-12-15 02:21:34.894329] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:26:10.163 [2024-12-15 02:21:34.894336] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.048 ms 00:26:10.163 [2024-12-15 02:21:34.894342] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.424 [2024-12-15 02:21:34.935629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.424 [2024-12-15 02:21:34.935660] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:26:10.424 [2024-12-15 02:21:34.935672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 41.249 ms 00:26:10.424 [2024-12-15 02:21:34.935678] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.424 [2024-12-15 02:21:34.935712] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.424 [2024-12-15 02:21:34.935720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:26:10.425 [2024-12-15 02:21:34.935727] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:26:10.425 [2024-12-15 02:21:34.935733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.936151] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.936165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:26:10.425 [2024-12-15 02:21:34.936173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:26:10.425 [2024-12-15 02:21:34.936184] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.936312] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.936321] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:26:10.425 [2024-12-15 02:21:34.936328] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:26:10.425 [2024-12-15 02:21:34.936334] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.948095] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.948121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:26:10.425 [2024-12-15 02:21:34.948130] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.742 ms 00:26:10.425 [2024-12-15 02:21:34.948136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.958594] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:26:10.425 [2024-12-15 02:21:34.958621] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:26:10.425 [2024-12-15 02:21:34.958631] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.958637] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:26:10.425 [2024-12-15 02:21:34.958645] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.383 ms 00:26:10.425 [2024-12-15 02:21:34.958651] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.977170] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.977203] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:26:10.425 [2024-12-15 02:21:34.977213] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.490 ms 00:26:10.425 [2024-12-15 02:21:34.977220] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 
02:21:34.986807] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.986832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:26:10.425 [2024-12-15 02:21:34.986840] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.556 ms 00:26:10.425 [2024-12-15 02:21:34.986845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.995968] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.995991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:26:10.425 [2024-12-15 02:21:34.995998] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 9.097 ms 00:26:10.425 [2024-12-15 02:21:34.996004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:34.996482] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:34.996493] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:26:10.425 [2024-12-15 02:21:34.996501] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.420 ms 00:26:10.425 [2024-12-15 02:21:34.996507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.044406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.044436] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:26:10.425 [2024-12-15 02:21:35.044446] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 47.886 ms 00:26:10.425 [2024-12-15 02:21:35.044452] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.052651] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:26:10.425 [2024-12-15 02:21:35.054866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.054890] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:26:10.425 [2024-12-15 02:21:35.054899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 10.382 ms 00:26:10.425 [2024-12-15 02:21:35.054910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.054959] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.054968] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:26:10.425 [2024-12-15 02:21:35.054974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:26:10.425 [2024-12-15 02:21:35.054980] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.055051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.055061] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:26:10.425 [2024-12-15 02:21:35.055068] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:26:10.425 [2024-12-15 02:21:35.055075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.055093] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.055099] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:26:10.425 [2024-12-15 02:21:35.055105] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:26:10.425 [2024-12-15 02:21:35.055112] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.055140] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:26:10.425 [2024-12-15 02:21:35.055148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.055155] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:26:10.425 [2024-12-15 02:21:35.055161] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:26:10.425 [2024-12-15 02:21:35.055170] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.074217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.074369] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:26:10.425 [2024-12-15 02:21:35.074383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 19.033 ms 00:26:10.425 [2024-12-15 02:21:35.074390] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.074443] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:26:10.425 [2024-12-15 02:21:35.074450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:26:10.425 [2024-12-15 02:21:35.074458] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:26:10.425 [2024-12-15 02:21:35.074464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:26:10.425 [2024-12-15 02:21:35.075411] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 227.921 ms, result 0 00:26:11.369  [2024-12-15T02:21:37.517Z] Copying: 22/1024 [MB] (22 MBps) [2024-12-15T02:21:38.090Z] Copying: 44/1024 [MB] (22 MBps) [2024-12-15T02:21:39.475Z] Copying: 66/1024 [MB] (21 MBps) [2024-12-15T02:21:40.417Z] Copying: 81/1024 [MB] (14 MBps) [2024-12-15T02:21:41.362Z] Copying: 93/1024 [MB] (11 MBps) [2024-12-15T02:21:42.306Z] Copying: 104/1024 [MB] (11 MBps) [2024-12-15T02:21:43.250Z] Copying: 115/1024 [MB] (11 MBps) [2024-12-15T02:21:44.192Z] Copying: 126/1024 [MB] (10 MBps) [2024-12-15T02:21:45.133Z] Copying: 137/1024 [MB] (11 MBps) [2024-12-15T02:21:46.518Z] Copying: 148/1024 [MB] (11 MBps) [2024-12-15T02:21:47.091Z] Copying: 160/1024 [MB] (11 MBps) [2024-12-15T02:21:48.477Z] Copying: 171/1024 [MB] (11 MBps) [2024-12-15T02:21:49.422Z] Copying: 182/1024 [MB] (11 MBps) [2024-12-15T02:21:50.367Z] Copying: 193/1024 [MB] (10 MBps) [2024-12-15T02:21:51.312Z] Copying: 204/1024 [MB] (10 MBps) [2024-12-15T02:21:52.298Z] Copying: 214/1024 [MB] (10 MBps) [2024-12-15T02:21:53.237Z] Copying: 229560/1048576 [kB] (10168 kBps) [2024-12-15T02:21:54.178Z] Copying: 239528/1048576 [kB] (9968 kBps) [2024-12-15T02:21:55.120Z] Copying: 245/1024 [MB] (11 MBps) [2024-12-15T02:21:56.503Z] Copying: 256/1024 [MB] (11 MBps) [2024-12-15T02:21:57.445Z] Copying: 267/1024 [MB] (11 MBps) [2024-12-15T02:21:58.386Z] Copying: 278/1024 [MB] (11 MBps) [2024-12-15T02:21:59.328Z] Copying: 289/1024 [MB] (10 MBps) [2024-12-15T02:22:00.270Z] Copying: 300/1024 [MB] (10 MBps) [2024-12-15T02:22:01.211Z] Copying: 310/1024 [MB] (10 MBps) [2024-12-15T02:22:02.153Z] Copying: 321/1024 [MB] (11 MBps) [2024-12-15T02:22:03.097Z] Copying: 333/1024 [MB] (11 MBps) [2024-12-15T02:22:04.485Z] Copying: 344/1024 [MB] (11 MBps) 
[2024-12-15T02:22:05.429Z] Copying: 355/1024 [MB] (11 MBps) [2024-12-15T02:22:06.372Z] Copying: 367/1024 [MB] (11 MBps) [2024-12-15T02:22:07.317Z] Copying: 378/1024 [MB] (11 MBps) [2024-12-15T02:22:08.260Z] Copying: 389/1024 [MB] (11 MBps) [2024-12-15T02:22:09.205Z] Copying: 401/1024 [MB] (11 MBps) [2024-12-15T02:22:10.156Z] Copying: 412/1024 [MB] (11 MBps) [2024-12-15T02:22:11.101Z] Copying: 423/1024 [MB] (11 MBps) [2024-12-15T02:22:12.489Z] Copying: 434/1024 [MB] (10 MBps) [2024-12-15T02:22:13.433Z] Copying: 445/1024 [MB] (11 MBps) [2024-12-15T02:22:14.377Z] Copying: 456/1024 [MB] (11 MBps) [2024-12-15T02:22:15.352Z] Copying: 467/1024 [MB] (10 MBps) [2024-12-15T02:22:16.296Z] Copying: 478/1024 [MB] (11 MBps) [2024-12-15T02:22:17.239Z] Copying: 489/1024 [MB] (11 MBps) [2024-12-15T02:22:18.182Z] Copying: 508/1024 [MB] (18 MBps) [2024-12-15T02:22:19.127Z] Copying: 519/1024 [MB] (11 MBps) [2024-12-15T02:22:20.514Z] Copying: 531/1024 [MB] (11 MBps) [2024-12-15T02:22:21.089Z] Copying: 542/1024 [MB] (11 MBps) [2024-12-15T02:22:22.476Z] Copying: 554/1024 [MB] (11 MBps) [2024-12-15T02:22:23.422Z] Copying: 567/1024 [MB] (13 MBps) [2024-12-15T02:22:24.365Z] Copying: 585/1024 [MB] (17 MBps) [2024-12-15T02:22:25.310Z] Copying: 596/1024 [MB] (11 MBps) [2024-12-15T02:22:26.254Z] Copying: 609/1024 [MB] (12 MBps) [2024-12-15T02:22:27.200Z] Copying: 619/1024 [MB] (10 MBps) [2024-12-15T02:22:28.143Z] Copying: 629/1024 [MB] (10 MBps) [2024-12-15T02:22:29.530Z] Copying: 644/1024 [MB] (14 MBps) [2024-12-15T02:22:30.103Z] Copying: 656/1024 [MB] (11 MBps) [2024-12-15T02:22:31.495Z] Copying: 667/1024 [MB] (11 MBps) [2024-12-15T02:22:32.439Z] Copying: 678/1024 [MB] (10 MBps) [2024-12-15T02:22:33.385Z] Copying: 704536/1048576 [kB] (10056 kBps) [2024-12-15T02:22:34.329Z] Copying: 699/1024 [MB] (11 MBps) [2024-12-15T02:22:35.271Z] Copying: 710/1024 [MB] (11 MBps) [2024-12-15T02:22:36.213Z] Copying: 723/1024 [MB] (13 MBps) [2024-12-15T02:22:37.159Z] Copying: 737/1024 [MB] (13 MBps) [2024-12-15T02:22:38.131Z] Copying: 750/1024 [MB] (13 MBps) [2024-12-15T02:22:39.528Z] Copying: 761/1024 [MB] (11 MBps) [2024-12-15T02:22:40.100Z] Copying: 772/1024 [MB] (10 MBps) [2024-12-15T02:22:41.487Z] Copying: 782/1024 [MB] (10 MBps) [2024-12-15T02:22:42.432Z] Copying: 793/1024 [MB] (11 MBps) [2024-12-15T02:22:43.377Z] Copying: 804/1024 [MB] (10 MBps) [2024-12-15T02:22:44.321Z] Copying: 814/1024 [MB] (10 MBps) [2024-12-15T02:22:45.269Z] Copying: 824/1024 [MB] (10 MBps) [2024-12-15T02:22:46.216Z] Copying: 836/1024 [MB] (11 MBps) [2024-12-15T02:22:47.162Z] Copying: 847/1024 [MB] (11 MBps) [2024-12-15T02:22:48.106Z] Copying: 858/1024 [MB] (10 MBps) [2024-12-15T02:22:49.494Z] Copying: 869/1024 [MB] (11 MBps) [2024-12-15T02:22:50.438Z] Copying: 880/1024 [MB] (11 MBps) [2024-12-15T02:22:51.383Z] Copying: 891/1024 [MB] (11 MBps) [2024-12-15T02:22:52.327Z] Copying: 902/1024 [MB] (10 MBps) [2024-12-15T02:22:53.273Z] Copying: 913/1024 [MB] (10 MBps) [2024-12-15T02:22:54.216Z] Copying: 923/1024 [MB] (10 MBps) [2024-12-15T02:22:55.151Z] Copying: 936/1024 [MB] (12 MBps) [2024-12-15T02:22:56.095Z] Copying: 979/1024 [MB] (43 MBps) [2024-12-15T02:22:57.481Z] Copying: 992/1024 [MB] (12 MBps) [2024-12-15T02:22:58.423Z] Copying: 1005/1024 [MB] (13 MBps) [2024-12-15T02:22:58.994Z] Copying: 1023/1024 [MB] (17 MBps) [2024-12-15T02:22:58.994Z] Copying: 1024/1024 [MB] (average 12 MBps)[2024-12-15 02:22:58.987716] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.229 [2024-12-15 02:22:58.988287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Deinit core IO channel 00:27:34.229 [2024-12-15 02:22:58.988321] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:34.229 [2024-12-15 02:22:58.988335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.229 [2024-12-15 02:22:58.988377] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:27:34.490 [2024-12-15 02:22:58.994923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:58.994971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:27:34.490 [2024-12-15 02:22:58.994984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.523 ms 00:27:34.490 [2024-12-15 02:22:58.995002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.490 [2024-12-15 02:22:59.004951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:59.005003] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:27:34.490 [2024-12-15 02:22:59.005015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.819 ms 00:27:34.490 [2024-12-15 02:22:59.005024] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.490 [2024-12-15 02:22:59.028611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:59.028681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:27:34.490 [2024-12-15 02:22:59.028693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.567 ms 00:27:34.490 [2024-12-15 02:22:59.028702] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.490 [2024-12-15 02:22:59.034841] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:59.035028] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:27:34.490 [2024-12-15 02:22:59.035050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.090 ms 00:27:34.490 [2024-12-15 02:22:59.035059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.490 [2024-12-15 02:22:59.062102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:59.062315] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:27:34.490 [2024-12-15 02:22:59.062338] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.001 ms 00:27:34.490 [2024-12-15 02:22:59.062347] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.490 [2024-12-15 02:22:59.077856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.490 [2024-12-15 02:22:59.078044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:27:34.490 [2024-12-15 02:22:59.078066] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.466 ms 00:27:34.490 [2024-12-15 02:22:59.078075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.377022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.753 [2024-12-15 02:22:59.377257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:27:34.753 [2024-12-15 02:22:59.377294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 298.879 ms 00:27:34.753 [2024-12-15 02:22:59.377304] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.404021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.753 [2024-12-15 02:22:59.404071] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:27:34.753 [2024-12-15 02:22:59.404086] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.691 ms 00:27:34.753 [2024-12-15 02:22:59.404107] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.429669] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.753 [2024-12-15 02:22:59.429717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:27:34.753 [2024-12-15 02:22:59.429730] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:27:34.753 [2024-12-15 02:22:59.429738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.454736] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.753 [2024-12-15 02:22:59.454782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:27:34.753 [2024-12-15 02:22:59.454795] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.950 ms 00:27:34.753 [2024-12-15 02:22:59.454803] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.479670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.753 [2024-12-15 02:22:59.479717] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:27:34.753 [2024-12-15 02:22:59.479729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.792 ms 00:27:34.753 [2024-12-15 02:22:59.479737] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.753 [2024-12-15 02:22:59.479784] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:27:34.753 [2024-12-15 02:22:59.479800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 106752 / 261120 wr_cnt: 1 state: open 00:27:34.753 [2024-12-15 02:22:59.479811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479855] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479864] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479879] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479888] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 
02:22:59.479897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479905] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479913] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479921] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479928] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479936] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479943] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479965] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479973] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.479995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480012] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480019] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480027] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480057] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480080] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480087] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 
00:27:34.753 [2024-12-15 02:22:59.480095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480165] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480180] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480247] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480255] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:27:34.753 [2024-12-15 02:22:59.480281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480316] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 
wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480375] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480483] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480491] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480498] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480506] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480514] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480532] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480553] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 86: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480607] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480628] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480654] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480663] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480671] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:27:34.754 [2024-12-15 02:22:59.480687] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:27:34.754 [2024-12-15 02:22:59.480695] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c183996-2205-4a0b-bdc0-38705690ad6f 00:27:34.754 [2024-12-15 02:22:59.480717] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 106752 00:27:34.754 [2024-12-15 02:22:59.480725] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 107712 00:27:34.754 [2024-12-15 02:22:59.480733] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 106752 00:27:34.754 [2024-12-15 02:22:59.480743] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0090 00:27:34.754 [2024-12-15 02:22:59.480750] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:27:34.754 [2024-12-15 02:22:59.480758] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:27:34.754 [2024-12-15 02:22:59.480766] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:27:34.754 [2024-12-15 02:22:59.480773] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:27:34.754 [2024-12-15 02:22:59.480780] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:27:34.754 [2024-12-15 02:22:59.480788] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.754 [2024-12-15 02:22:59.480796] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 
00:27:34.754 [2024-12-15 02:22:59.480804] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.005 ms 00:27:34.754 [2024-12-15 02:22:59.480812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.754 [2024-12-15 02:22:59.494429] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.754 [2024-12-15 02:22:59.494473] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:27:34.754 [2024-12-15 02:22:59.494485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.597 ms 00:27:34.754 [2024-12-15 02:22:59.494494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:34.754 [2024-12-15 02:22:59.494896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:34.754 [2024-12-15 02:22:59.494907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:27:34.754 [2024-12-15 02:22:59.494917] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.365 ms 00:27:34.754 [2024-12-15 02:22:59.494932] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.531597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.531648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:35.015 [2024-12-15 02:22:59.531661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.531671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.531742] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.531752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:35.015 [2024-12-15 02:22:59.531761] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.531776] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.531845] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.531857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:35.015 [2024-12-15 02:22:59.531867] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.531876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.531894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.531904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:35.015 [2024-12-15 02:22:59.531913] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.531922] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.615945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.616001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:35.015 [2024-12-15 02:22:59.616015] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.616025] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.685639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.685697] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:35.015 [2024-12-15 02:22:59.685709] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.685724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.685804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.685814] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:35.015 [2024-12-15 02:22:59.685823] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.685845] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.685886] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.685896] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:35.015 [2024-12-15 02:22:59.685906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.685914] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.686021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.686033] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:35.015 [2024-12-15 02:22:59.686042] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.686051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.686085] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.686095] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:27:35.015 [2024-12-15 02:22:59.686103] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.686111] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.686158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.686168] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:35.015 [2024-12-15 02:22:59.686176] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.686185] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.686270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:27:35.015 [2024-12-15 02:22:59.686282] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:35.015 [2024-12-15 02:22:59.686291] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:27:35.015 [2024-12-15 02:22:59.686299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:35.015 [2024-12-15 02:22:59.686465] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 698.753 ms, result 0 00:27:36.479 00:27:36.479 00:27:36.479 02:23:01 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@90 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:27:39.025 02:23:03 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@93 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile 
--count=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:27:39.025 [2024-12-15 02:23:03.350431] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:27:39.025 [2024-12-15 02:23:03.350551] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83505 ] 00:27:39.025 [2024-12-15 02:23:03.511831] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:27:39.025 [2024-12-15 02:23:03.624962] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:27:39.285 [2024-12-15 02:23:03.920722] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.285 [2024-12-15 02:23:03.920813] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:27:39.546 [2024-12-15 02:23:04.086185] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.546 [2024-12-15 02:23:04.086266] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:27:39.546 [2024-12-15 02:23:04.086282] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:27:39.546 [2024-12-15 02:23:04.086291] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.086348] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.086361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:27:39.547 [2024-12-15 02:23:04.086370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:27:39.547 [2024-12-15 02:23:04.086379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.086401] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:27:39.547 [2024-12-15 02:23:04.087294] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:27:39.547 [2024-12-15 02:23:04.087390] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.087400] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:27:39.547 [2024-12-15 02:23:04.087410] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.994 ms 00:27:39.547 [2024-12-15 02:23:04.087418] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.089139] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:27:39.547 [2024-12-15 02:23:04.103563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.103610] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:27:39.547 [2024-12-15 02:23:04.103624] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.427 ms 00:27:39.547 [2024-12-15 02:23:04.103633] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.103721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.103732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:27:39.547 [2024-12-15 02:23:04.103741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:27:39.547 
[2024-12-15 02:23:04.103749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.112345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.112387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:27:39.547 [2024-12-15 02:23:04.112404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.514 ms 00:27:39.547 [2024-12-15 02:23:04.112413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.112496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.112506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:27:39.547 [2024-12-15 02:23:04.112515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:27:39.547 [2024-12-15 02:23:04.112523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.112568] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.112578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:27:39.547 [2024-12-15 02:23:04.112586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:27:39.547 [2024-12-15 02:23:04.112598] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.112623] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:27:39.547 [2024-12-15 02:23:04.116725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.116766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:27:39.547 [2024-12-15 02:23:04.116777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.109 ms 00:27:39.547 [2024-12-15 02:23:04.116785] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.116824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.116833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:27:39.547 [2024-12-15 02:23:04.116842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:27:39.547 [2024-12-15 02:23:04.116851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.116904] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:27:39.547 [2024-12-15 02:23:04.116929] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:27:39.547 [2024-12-15 02:23:04.116970] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:27:39.547 [2024-12-15 02:23:04.116986] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:27:39.547 [2024-12-15 02:23:04.117092] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:27:39.547 [2024-12-15 02:23:04.117104] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:27:39.547 [2024-12-15 02:23:04.117115] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 
00:27:39.547 [2024-12-15 02:23:04.117126] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117136] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117145] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:27:39.547 [2024-12-15 02:23:04.117154] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:27:39.547 [2024-12-15 02:23:04.117165] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:27:39.547 [2024-12-15 02:23:04.117173] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:27:39.547 [2024-12-15 02:23:04.117181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.117189] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:27:39.547 [2024-12-15 02:23:04.117220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.280 ms 00:27:39.547 [2024-12-15 02:23:04.117229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.117314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.547 [2024-12-15 02:23:04.117323] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:27:39.547 [2024-12-15 02:23:04.117331] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:27:39.547 [2024-12-15 02:23:04.117339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.547 [2024-12-15 02:23:04.117442] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:27:39.547 [2024-12-15 02:23:04.117453] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:27:39.547 [2024-12-15 02:23:04.117462] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117470] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117479] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:27:39.547 [2024-12-15 02:23:04.117486] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117493] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117501] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:27:39.547 [2024-12-15 02:23:04.117508] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117515] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.547 [2024-12-15 02:23:04.117522] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:27:39.547 [2024-12-15 02:23:04.117529] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:27:39.547 [2024-12-15 02:23:04.117538] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:27:39.547 [2024-12-15 02:23:04.117553] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:27:39.547 [2024-12-15 02:23:04.117561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:27:39.547 [2024-12-15 02:23:04.117568] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117575] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region 
nvc_md_mirror 00:27:39.547 [2024-12-15 02:23:04.117582] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117595] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:27:39.547 [2024-12-15 02:23:04.117602] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:27:39.547 [2024-12-15 02:23:04.117622] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117628] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:27:39.547 [2024-12-15 02:23:04.117642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:27:39.547 [2024-12-15 02:23:04.117648] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.547 [2024-12-15 02:23:04.117656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:27:39.547 [2024-12-15 02:23:04.117663] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:27:39.548 [2024-12-15 02:23:04.117670] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:27:39.548 [2024-12-15 02:23:04.117676] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:27:39.548 [2024-12-15 02:23:04.117683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:27:39.548 [2024-12-15 02:23:04.117690] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.548 [2024-12-15 02:23:04.117696] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:27:39.548 [2024-12-15 02:23:04.117703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:27:39.548 [2024-12-15 02:23:04.117710] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:27:39.548 [2024-12-15 02:23:04.117717] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:27:39.548 [2024-12-15 02:23:04.117724] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:27:39.548 [2024-12-15 02:23:04.117731] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.548 [2024-12-15 02:23:04.117738] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:27:39.548 [2024-12-15 02:23:04.117745] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:27:39.548 [2024-12-15 02:23:04.117752] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.548 [2024-12-15 02:23:04.117758] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:27:39.548 [2024-12-15 02:23:04.117768] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:27:39.548 [2024-12-15 02:23:04.117776] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:27:39.548 [2024-12-15 02:23:04.117784] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:27:39.548 [2024-12-15 02:23:04.117791] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:27:39.548 [2024-12-15 02:23:04.117798] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:27:39.548 [2024-12-15 02:23:04.117805] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:27:39.548 [2024-12-15 02:23:04.117813] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:27:39.548 [2024-12-15 02:23:04.117820] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:27:39.548 [2024-12-15 02:23:04.117827] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:27:39.548 [2024-12-15 02:23:04.117860] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:27:39.548 [2024-12-15 02:23:04.117873] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.117884] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:27:39.548 [2024-12-15 02:23:04.117891] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:27:39.548 [2024-12-15 02:23:04.117900] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:27:39.548 [2024-12-15 02:23:04.117907] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:27:39.548 [2024-12-15 02:23:04.117915] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:27:39.548 [2024-12-15 02:23:04.117922] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:27:39.548 [2024-12-15 02:23:04.117930] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:27:39.548 [2024-12-15 02:23:04.117938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:27:39.548 [2024-12-15 02:23:04.117945] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:27:39.548 [2024-12-15 02:23:04.117953] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.117960] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.117968] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.117975] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.117983] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:27:39.548 [2024-12-15 02:23:04.117989] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:27:39.548 [2024-12-15 02:23:04.117998] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region 
type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.118006] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:27:39.548 [2024-12-15 02:23:04.118013] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:27:39.548 [2024-12-15 02:23:04.118020] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:27:39.548 [2024-12-15 02:23:04.118027] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:27:39.548 [2024-12-15 02:23:04.118035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.118046] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:27:39.548 [2024-12-15 02:23:04.118055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.661 ms 00:27:39.548 [2024-12-15 02:23:04.118063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.149703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.149752] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:27:39.548 [2024-12-15 02:23:04.149769] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.594 ms 00:27:39.548 [2024-12-15 02:23:04.149778] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.149875] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.149884] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:27:39.548 [2024-12-15 02:23:04.149893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:27:39.548 [2024-12-15 02:23:04.149905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.194740] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.194793] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:27:39.548 [2024-12-15 02:23:04.194807] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 44.774 ms 00:27:39.548 [2024-12-15 02:23:04.194816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.194870] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.194885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:27:39.548 [2024-12-15 02:23:04.194895] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:27:39.548 [2024-12-15 02:23:04.194904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.195538] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.195581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:27:39.548 [2024-12-15 02:23:04.195593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.554 ms 00:27:39.548 [2024-12-15 02:23:04.195601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.195756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: 
[FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.195771] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:27:39.548 [2024-12-15 02:23:04.195780] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.123 ms 00:27:39.548 [2024-12-15 02:23:04.195788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.211757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.211805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:27:39.548 [2024-12-15 02:23:04.211817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.947 ms 00:27:39.548 [2024-12-15 02:23:04.211825] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.226243] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:27:39.548 [2024-12-15 02:23:04.226293] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:27:39.548 [2024-12-15 02:23:04.226307] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.226317] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:27:39.548 [2024-12-15 02:23:04.226327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.369 ms 00:27:39.548 [2024-12-15 02:23:04.226335] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.548 [2024-12-15 02:23:04.252061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.548 [2024-12-15 02:23:04.252269] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:27:39.549 [2024-12-15 02:23:04.252293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.669 ms 00:27:39.549 [2024-12-15 02:23:04.252301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.549 [2024-12-15 02:23:04.265373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.549 [2024-12-15 02:23:04.265424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:27:39.549 [2024-12-15 02:23:04.265436] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.947 ms 00:27:39.549 [2024-12-15 02:23:04.265444] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.549 [2024-12-15 02:23:04.277604] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.549 [2024-12-15 02:23:04.277647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:27:39.549 [2024-12-15 02:23:04.277659] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.112 ms 00:27:39.549 [2024-12-15 02:23:04.277667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.549 [2024-12-15 02:23:04.278385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.549 [2024-12-15 02:23:04.278413] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:27:39.549 [2024-12-15 02:23:04.278424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.604 ms 00:27:39.549 [2024-12-15 02:23:04.278433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.809 [2024-12-15 02:23:04.343054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.809 [2024-12-15 
02:23:04.343131] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:27:39.809 [2024-12-15 02:23:04.343147] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 64.599 ms 00:27:39.809 [2024-12-15 02:23:04.343157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.809 [2024-12-15 02:23:04.354560] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:27:39.809 [2024-12-15 02:23:04.357563] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.809 [2024-12-15 02:23:04.357609] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:27:39.809 [2024-12-15 02:23:04.357622] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.137 ms 00:27:39.809 [2024-12-15 02:23:04.357631] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.809 [2024-12-15 02:23:04.357722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.809 [2024-12-15 02:23:04.357734] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:27:39.809 [2024-12-15 02:23:04.357746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.018 ms 00:27:39.809 [2024-12-15 02:23:04.357754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.809 [2024-12-15 02:23:04.359664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.809 [2024-12-15 02:23:04.359818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:27:39.809 [2024-12-15 02:23:04.359880] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.869 ms 00:27:39.809 [2024-12-15 02:23:04.359904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.810 [2024-12-15 02:23:04.359950] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.810 [2024-12-15 02:23:04.359972] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:27:39.810 [2024-12-15 02:23:04.359993] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:27:39.810 [2024-12-15 02:23:04.360021] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.810 [2024-12-15 02:23:04.360074] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:27:39.810 [2024-12-15 02:23:04.360100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.810 [2024-12-15 02:23:04.360164] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:27:39.810 [2024-12-15 02:23:04.360188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:27:39.810 [2024-12-15 02:23:04.360227] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.810 [2024-12-15 02:23:04.386030] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.810 [2024-12-15 02:23:04.386221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:27:39.810 [2024-12-15 02:23:04.386294] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.760 ms 00:27:39.810 [2024-12-15 02:23:04.386317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.810 [2024-12-15 02:23:04.386406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:27:39.810 [2024-12-15 02:23:04.386431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:27:39.810 [2024-12-15 
02:23:04.386452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.041 ms 00:27:39.810 [2024-12-15 02:23:04.386462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:27:39.810 [2024-12-15 02:23:04.387757] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 301.083 ms, result 0 00:27:41.193  [2024-12-15T02:23:06.902Z] Copying: 996/1048576 [kB] (996 kBps) [2024-12-15T02:23:07.843Z] Copying: 4496/1048576 [kB] (3500 kBps) [2024-12-15T02:23:08.787Z] Copying: 18/1024 [MB] (14 MBps) [2024-12-15T02:23:09.728Z] Copying: 49/1024 [MB] (30 MBps) [2024-12-15T02:23:10.665Z] Copying: 73/1024 [MB] (23 MBps) [2024-12-15T02:23:11.604Z] Copying: 110/1024 [MB] (37 MBps) [2024-12-15T02:23:12.984Z] Copying: 141/1024 [MB] (31 MBps) [2024-12-15T02:23:13.927Z] Copying: 171/1024 [MB] (30 MBps) [2024-12-15T02:23:14.872Z] Copying: 198/1024 [MB] (26 MBps) [2024-12-15T02:23:15.815Z] Copying: 225/1024 [MB] (27 MBps) [2024-12-15T02:23:16.755Z] Copying: 246/1024 [MB] (20 MBps) [2024-12-15T02:23:17.700Z] Copying: 268/1024 [MB] (22 MBps) [2024-12-15T02:23:18.644Z] Copying: 288/1024 [MB] (19 MBps) [2024-12-15T02:23:19.589Z] Copying: 316/1024 [MB] (28 MBps) [2024-12-15T02:23:20.969Z] Copying: 339/1024 [MB] (22 MBps) [2024-12-15T02:23:21.913Z] Copying: 368/1024 [MB] (28 MBps) [2024-12-15T02:23:22.857Z] Copying: 415/1024 [MB] (47 MBps) [2024-12-15T02:23:23.799Z] Copying: 439/1024 [MB] (23 MBps) [2024-12-15T02:23:24.783Z] Copying: 461/1024 [MB] (22 MBps) [2024-12-15T02:23:25.725Z] Copying: 484/1024 [MB] (22 MBps) [2024-12-15T02:23:26.670Z] Copying: 514/1024 [MB] (30 MBps) [2024-12-15T02:23:27.616Z] Copying: 533/1024 [MB] (18 MBps) [2024-12-15T02:23:29.005Z] Copying: 552/1024 [MB] (18 MBps) [2024-12-15T02:23:29.579Z] Copying: 578/1024 [MB] (25 MBps) [2024-12-15T02:23:30.967Z] Copying: 607/1024 [MB] (29 MBps) [2024-12-15T02:23:31.912Z] Copying: 637/1024 [MB] (29 MBps) [2024-12-15T02:23:32.856Z] Copying: 663/1024 [MB] (26 MBps) [2024-12-15T02:23:33.801Z] Copying: 693/1024 [MB] (30 MBps) [2024-12-15T02:23:34.747Z] Copying: 719/1024 [MB] (26 MBps) [2024-12-15T02:23:35.692Z] Copying: 735/1024 [MB] (16 MBps) [2024-12-15T02:23:36.637Z] Copying: 766/1024 [MB] (30 MBps) [2024-12-15T02:23:37.583Z] Copying: 787/1024 [MB] (21 MBps) [2024-12-15T02:23:38.972Z] Copying: 817/1024 [MB] (30 MBps) [2024-12-15T02:23:39.916Z] Copying: 839/1024 [MB] (21 MBps) [2024-12-15T02:23:40.861Z] Copying: 866/1024 [MB] (26 MBps) [2024-12-15T02:23:41.806Z] Copying: 897/1024 [MB] (31 MBps) [2024-12-15T02:23:42.748Z] Copying: 926/1024 [MB] (28 MBps) [2024-12-15T02:23:43.693Z] Copying: 953/1024 [MB] (27 MBps) [2024-12-15T02:23:44.638Z] Copying: 970/1024 [MB] (17 MBps) [2024-12-15T02:23:45.587Z] Copying: 997/1024 [MB] (27 MBps) [2024-12-15T02:23:46.159Z] Copying: 1014/1024 [MB] (16 MBps) [2024-12-15T02:23:46.733Z] Copying: 1024/1024 [MB] (average 24 MBps)[2024-12-15 02:23:46.523592] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.523689] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:28:21.968 [2024-12-15 02:23:46.523714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:21.968 [2024-12-15 02:23:46.523730] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.523769] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:28:21.968 [2024-12-15 
02:23:46.528146] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.528212] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:28:21.968 [2024-12-15 02:23:46.528225] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.349 ms 00:28:21.968 [2024-12-15 02:23:46.528234] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.528489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.528503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:28:21.968 [2024-12-15 02:23:46.528513] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.216 ms 00:28:21.968 [2024-12-15 02:23:46.528522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.543061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.543121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:28:21.968 [2024-12-15 02:23:46.543134] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.519 ms 00:28:21.968 [2024-12-15 02:23:46.543145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.549347] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.549421] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:28:21.968 [2024-12-15 02:23:46.549433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.170 ms 00:28:21.968 [2024-12-15 02:23:46.549442] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.577038] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.577303] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:28:21.968 [2024-12-15 02:23:46.577327] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.539 ms 00:28:21.968 [2024-12-15 02:23:46.577336] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.593603] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.593653] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:28:21.968 [2024-12-15 02:23:46.593666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.220 ms 00:28:21.968 [2024-12-15 02:23:46.593675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.598324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.598375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:28:21.968 [2024-12-15 02:23:46.598386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.589 ms 00:28:21.968 [2024-12-15 02:23:46.598403] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.625394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.625444] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:28:21.968 [2024-12-15 02:23:46.625456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.973 ms 00:28:21.968 [2024-12-15 02:23:46.625464] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl0] status: 0 00:28:21.968 [2024-12-15 02:23:46.651811] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.968 [2024-12-15 02:23:46.651861] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:28:21.969 [2024-12-15 02:23:46.651873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.295 ms 00:28:21.969 [2024-12-15 02:23:46.651881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.969 [2024-12-15 02:23:46.677720] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.969 [2024-12-15 02:23:46.677769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:28:21.969 [2024-12-15 02:23:46.677783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.788 ms 00:28:21.969 [2024-12-15 02:23:46.677791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.969 [2024-12-15 02:23:46.703256] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.969 [2024-12-15 02:23:46.703464] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:28:21.969 [2024-12-15 02:23:46.703486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.351 ms 00:28:21.969 [2024-12-15 02:23:46.703495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.969 [2024-12-15 02:23:46.703752] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:28:21.969 [2024-12-15 02:23:46.703790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:28:21.969 [2024-12-15 02:23:46.703805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:28:21.969 [2024-12-15 02:23:46.703814] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703823] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703831] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703840] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703849] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703856] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703874] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703899] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703914] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703940] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703950] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703974] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703981] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.703996] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704003] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704039] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704056] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704085] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704110] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704118] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704126] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704134] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 
[2024-12-15 02:23:46.704141] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704149] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704157] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704166] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704174] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704183] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704230] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704240] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704272] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704288] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:28:21.969 [2024-12-15 02:23:46.704296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704315] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704331] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704339] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704347] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704355] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704371] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 
state: free 00:28:21.970 [2024-12-15 02:23:46.704380] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704388] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704397] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704406] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704421] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704429] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704436] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704452] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704470] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704485] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704500] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704522] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704529] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704538] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704547] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704554] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704561] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704568] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 
0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704575] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704583] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704590] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704644] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704652] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:28:21.970 [2024-12-15 02:23:46.704669] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:28:21.970 [2024-12-15 02:23:46.704679] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c183996-2205-4a0b-bdc0-38705690ad6f 00:28:21.970 [2024-12-15 02:23:46.704687] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:28:21.970 [2024-12-15 02:23:46.704701] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 157888 00:28:21.970 [2024-12-15 02:23:46.704709] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 155904 00:28:21.970 [2024-12-15 02:23:46.704719] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0127 00:28:21.970 [2024-12-15 02:23:46.704726] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:28:21.970 [2024-12-15 02:23:46.704742] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:28:21.970 [2024-12-15 02:23:46.704750] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:28:21.970 [2024-12-15 02:23:46.704757] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:28:21.970 [2024-12-15 02:23:46.704764] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:28:21.970 [2024-12-15 02:23:46.704772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.970 [2024-12-15 02:23:46.704780] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:28:21.970 [2024-12-15 02:23:46.704789] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.025 ms 00:28:21.970 [2024-12-15 02:23:46.704798] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.970 [2024-12-15 02:23:46.718715] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.970 [2024-12-15 02:23:46.718909] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize 
L2P 00:28:21.970 [2024-12-15 02:23:46.718927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.879 ms 00:28:21.970 [2024-12-15 02:23:46.718936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:21.970 [2024-12-15 02:23:46.719362] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:21.970 [2024-12-15 02:23:46.719376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:28:21.970 [2024-12-15 02:23:46.719386] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:28:21.970 [2024-12-15 02:23:46.719402] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.756385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.756437] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:22.262 [2024-12-15 02:23:46.756449] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.756458] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.756527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.756537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:22.262 [2024-12-15 02:23:46.756547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.756563] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.756648] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.756662] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:22.262 [2024-12-15 02:23:46.756671] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.756679] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.756695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.756706] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:22.262 [2024-12-15 02:23:46.756714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.756723] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.842905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.842963] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:22.262 [2024-12-15 02:23:46.842977] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.842985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.913756] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.914011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:22.262 [2024-12-15 02:23:46.914032] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.914042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.914116] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.914126] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:22.262 [2024-12-15 02:23:46.914136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.914144] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.914232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.914245] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:22.262 [2024-12-15 02:23:46.914254] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.914263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.914374] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.914392] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:22.262 [2024-12-15 02:23:46.914402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.914411] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.262 [2024-12-15 02:23:46.914442] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.262 [2024-12-15 02:23:46.914453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:28:22.262 [2024-12-15 02:23:46.914461] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.262 [2024-12-15 02:23:46.914469] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.263 [2024-12-15 02:23:46.914510] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.263 [2024-12-15 02:23:46.914525] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:22.263 [2024-12-15 02:23:46.914533] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.263 [2024-12-15 02:23:46.914541] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.263 [2024-12-15 02:23:46.914587] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:28:22.263 [2024-12-15 02:23:46.914598] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:22.263 [2024-12-15 02:23:46.914607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:28:22.263 [2024-12-15 02:23:46.914616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:22.263 [2024-12-15 02:23:46.914750] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 391.147 ms, result 0 00:28:23.230 00:28:23.230 00:28:23.230 02:23:47 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@94 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:28:25.781 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:28:25.781 02:23:49 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@95 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile2 --count=262144 --skip=262144 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:28:25.781 [2024-12-15 02:23:50.001714] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:28:25.781 [2024-12-15 02:23:50.002056] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid83981 ] 00:28:25.781 [2024-12-15 02:23:50.167985] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:28:25.781 [2024-12-15 02:23:50.287935] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:28:26.043 [2024-12-15 02:23:50.586607] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:26.043 [2024-12-15 02:23:50.586693] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:28:26.043 [2024-12-15 02:23:50.749389] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.043 [2024-12-15 02:23:50.749453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:28:26.043 [2024-12-15 02:23:50.749469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:28:26.043 [2024-12-15 02:23:50.749479] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.043 [2024-12-15 02:23:50.749536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.043 [2024-12-15 02:23:50.749550] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:28:26.043 [2024-12-15 02:23:50.749559] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:28:26.043 [2024-12-15 02:23:50.749567] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.043 [2024-12-15 02:23:50.749589] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:28:26.043 [2024-12-15 02:23:50.750388] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:28:26.043 [2024-12-15 02:23:50.750423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.043 [2024-12-15 02:23:50.750432] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:28:26.043 [2024-12-15 02:23:50.750442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.840 ms 00:28:26.043 [2024-12-15 02:23:50.750450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.043 [2024-12-15 02:23:50.752214] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:28:26.043 [2024-12-15 02:23:50.766704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.043 [2024-12-15 02:23:50.766757] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:28:26.043 [2024-12-15 02:23:50.766772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.509 ms 00:28:26.043 [2024-12-15 02:23:50.766780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.043 [2024-12-15 02:23:50.766866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.043 [2024-12-15 02:23:50.766877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:28:26.043 [2024-12-15 02:23:50.766887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.028 ms 00:28:26.043 [2024-12-15 02:23:50.766895] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.043 [2024-12-15 02:23:50.775273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:28:26.044 [2024-12-15 02:23:50.775312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:28:26.044 [2024-12-15 02:23:50.775324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.298 ms 00:28:26.044 [2024-12-15 02:23:50.775339] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.775422] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.775431] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:28:26.044 [2024-12-15 02:23:50.775441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:26.044 [2024-12-15 02:23:50.775449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.775493] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.775505] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:28:26.044 [2024-12-15 02:23:50.775514] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.008 ms 00:28:26.044 [2024-12-15 02:23:50.775523] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.775551] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:28:26.044 [2024-12-15 02:23:50.779649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.779690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:28:26.044 [2024-12-15 02:23:50.779704] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.104 ms 00:28:26.044 [2024-12-15 02:23:50.779713] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.779755] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.779764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:28:26.044 [2024-12-15 02:23:50.779774] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:28:26.044 [2024-12-15 02:23:50.779782] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.779835] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:28:26.044 [2024-12-15 02:23:50.779860] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:28:26.044 [2024-12-15 02:23:50.779899] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:28:26.044 [2024-12-15 02:23:50.779920] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:28:26.044 [2024-12-15 02:23:50.780028] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:28:26.044 [2024-12-15 02:23:50.780041] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:28:26.044 [2024-12-15 02:23:50.780054] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:28:26.044 [2024-12-15 02:23:50.780065] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780076] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780085] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:28:26.044 [2024-12-15 02:23:50.780098] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:28:26.044 [2024-12-15 02:23:50.780106] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:28:26.044 [2024-12-15 02:23:50.780117] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:28:26.044 [2024-12-15 02:23:50.780126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.780136] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:28:26.044 [2024-12-15 02:23:50.780145] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.294 ms 00:28:26.044 [2024-12-15 02:23:50.780153] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.780262] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.044 [2024-12-15 02:23:50.780274] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:28:26.044 [2024-12-15 02:23:50.780284] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.095 ms 00:28:26.044 [2024-12-15 02:23:50.780292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.044 [2024-12-15 02:23:50.780394] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:28:26.044 [2024-12-15 02:23:50.780406] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:28:26.044 [2024-12-15 02:23:50.780416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:28:26.044 [2024-12-15 02:23:50.780440] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780449] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780459] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:28:26.044 [2024-12-15 02:23:50.780466] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780474] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:26.044 [2024-12-15 02:23:50.780481] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:28:26.044 [2024-12-15 02:23:50.780492] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:28:26.044 [2024-12-15 02:23:50.780500] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:28:26.044 [2024-12-15 02:23:50.780516] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:28:26.044 [2024-12-15 02:23:50.780526] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:28:26.044 [2024-12-15 02:23:50.780541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:28:26.044 [2024-12-15 02:23:50.780557] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780564] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780571] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:28:26.044 [2024-12-15 02:23:50.780578] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780588] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780596] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:28:26.044 [2024-12-15 02:23:50.780603] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780609] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780616] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:28:26.044 [2024-12-15 02:23:50.780623] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780630] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780637] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:28:26.044 [2024-12-15 02:23:50.780646] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780653] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780659] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:28:26.044 [2024-12-15 02:23:50.780667] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780673] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:26.044 [2024-12-15 02:23:50.780680] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:28:26.044 [2024-12-15 02:23:50.780686] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:28:26.044 [2024-12-15 02:23:50.780692] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:28:26.044 [2024-12-15 02:23:50.780700] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:28:26.044 [2024-12-15 02:23:50.780707] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:28:26.044 [2024-12-15 02:23:50.780714] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780720] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:28:26.044 [2024-12-15 02:23:50.780727] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:28:26.044 [2024-12-15 02:23:50.780733] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780739] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:28:26.044 [2024-12-15 02:23:50.780746] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:28:26.044 [2024-12-15 02:23:50.780754] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:28:26.044 [2024-12-15 02:23:50.780764] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:28:26.044 [2024-12-15 02:23:50.780773] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:28:26.044 [2024-12-15 02:23:50.780779] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:28:26.045 [2024-12-15 02:23:50.780786] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:28:26.045 
[2024-12-15 02:23:50.780793] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:28:26.045 [2024-12-15 02:23:50.780799] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:28:26.045 [2024-12-15 02:23:50.780809] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:28:26.045 [2024-12-15 02:23:50.780819] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:28:26.045 [2024-12-15 02:23:50.780828] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:28:26.045 [2024-12-15 02:23:50.780849] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:28:26.045 [2024-12-15 02:23:50.780857] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:28:26.045 [2024-12-15 02:23:50.780865] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:28:26.045 [2024-12-15 02:23:50.780872] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:28:26.045 [2024-12-15 02:23:50.780880] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:28:26.045 [2024-12-15 02:23:50.780887] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:28:26.045 [2024-12-15 02:23:50.780894] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:28:26.045 [2024-12-15 02:23:50.780903] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:28:26.045 [2024-12-15 02:23:50.780910] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780924] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780931] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780938] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:28:26.045 [2024-12-15 02:23:50.780945] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:28:26.045 [2024-12-15 02:23:50.780955] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:28:26.045 [2024-12-15 02:23:50.780971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:28:26.045 [2024-12-15 02:23:50.780977] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:28:26.045 [2024-12-15 02:23:50.780985] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:28:26.045 [2024-12-15 02:23:50.780992] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.045 [2024-12-15 02:23:50.781001] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:28:26.045 [2024-12-15 02:23:50.781009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.667 ms 00:28:26.045 [2024-12-15 02:23:50.781019] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.813964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.814016] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:28:26.307 [2024-12-15 02:23:50.814029] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.900 ms 00:28:26.307 [2024-12-15 02:23:50.814042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.814132] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.814142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:28:26.307 [2024-12-15 02:23:50.814151] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:28:26.307 [2024-12-15 02:23:50.814159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.860285] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.860341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:28:26.307 [2024-12-15 02:23:50.860355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 46.037 ms 00:28:26.307 [2024-12-15 02:23:50.860364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.860418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.860429] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:28:26.307 [2024-12-15 02:23:50.860442] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:28:26.307 [2024-12-15 02:23:50.860450] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.861045] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.861084] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:28:26.307 [2024-12-15 02:23:50.861097] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.516 ms 00:28:26.307 [2024-12-15 02:23:50.861105] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.861298] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.861310] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:28:26.307 [2024-12-15 02:23:50.861323] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.159 ms 00:28:26.307 [2024-12-15 02:23:50.861331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.877253] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.877299] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:28:26.307 [2024-12-15 02:23:50.877311] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.900 ms 00:28:26.307 [2024-12-15 02:23:50.877320] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.307 [2024-12-15 02:23:50.891809] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:28:26.307 [2024-12-15 02:23:50.891858] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:28:26.307 [2024-12-15 02:23:50.891872] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.307 [2024-12-15 02:23:50.891882] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:28:26.308 [2024-12-15 02:23:50.891893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.441 ms 00:28:26.308 [2024-12-15 02:23:50.891901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:50.917804] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:50.917892] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:28:26.308 [2024-12-15 02:23:50.917906] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.845 ms 00:28:26.308 [2024-12-15 02:23:50.917915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:50.931041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:50.931271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:28:26.308 [2024-12-15 02:23:50.931293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.058 ms 00:28:26.308 [2024-12-15 02:23:50.931302] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:50.944260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:50.944308] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:28:26.308 [2024-12-15 02:23:50.944320] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.836 ms 00:28:26.308 [2024-12-15 02:23:50.944327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:50.944982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:50.945012] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:28:26.308 [2024-12-15 02:23:50.945026] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.540 ms 00:28:26.308 [2024-12-15 02:23:50.945035] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.010708] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.010986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:28:26.308 [2024-12-15 02:23:51.011021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 65.652 ms 00:28:26.308 [2024-12-15 02:23:51.011031] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.022263] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:28:26.308 [2024-12-15 02:23:51.025446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.025640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:28:26.308 [2024-12-15 02:23:51.025662] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.284 ms 00:28:26.308 [2024-12-15 02:23:51.025671] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.025772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.025783] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:28:26.308 [2024-12-15 02:23:51.025794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.019 ms 00:28:26.308 [2024-12-15 02:23:51.025807] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.026770] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.026820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:28:26.308 [2024-12-15 02:23:51.026833] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.923 ms 00:28:26.308 [2024-12-15 02:23:51.026843] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.026873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.026883] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:28:26.308 [2024-12-15 02:23:51.026893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:28:26.308 [2024-12-15 02:23:51.026903] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.026948] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:28:26.308 [2024-12-15 02:23:51.026961] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.026970] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:28:26.308 [2024-12-15 02:23:51.026982] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:28:26.308 [2024-12-15 02:23:51.026992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.053548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.053600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:28:26.308 [2024-12-15 02:23:51.053619] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.535 ms 00:28:26.308 [2024-12-15 02:23:51.053628] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:28:26.308 [2024-12-15 02:23:51.053718] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:28:26.308 [2024-12-15 02:23:51.053728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:28:26.308 [2024-12-15 02:23:51.053737] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:28:26.308 [2024-12-15 02:23:51.053746] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
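
A note for decoding the superblock layout dump above: blk_offs and blk_sz are given in FTL blocks (hex), so a region's size in bytes depends on the block size. The following is a minimal bash sketch of the conversion, not part of the test run; the 4096-byte block size is an assumption taken from the base bdev's reported block_size later in this log, not from the dump itself.

#!/usr/bin/env bash
# Convert a region size from FTL blocks (hex, as printed by
# ftl_superblock_v5_md_layout_dump) to MiB, assuming 4 KiB blocks.
blk_sz=0x5000       # example: the type 0x2 region in the dump above
block_size=4096     # assumed FTL block size (matches the base bdev)
echo "$(( blk_sz * block_size / 1024 / 1024 )) MiB"   # prints: 80 MiB

Applied to the type 0x9 region (blk_sz 0x1900000) the same arithmetic yields 102400 MiB, which agrees with the 'blocks: 102400.00 MiB' figure printed for the data region at the top of the dump.
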
00:28:26.308 [2024-12-15 02:23:51.055183] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 305.299 ms, result 0 
00:28:27.694 [copy progress meter condensed: 62 incremental 'Copying: N/1024 [MB]' carriage-return updates between 2024-12-15T02:23:53Z and 2024-12-15T02:24:53Z elided; final record retained below] 
[2024-12-15T02:24:54.133Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-15 02:24:54.119346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.368 [2024-12-15 02:24:54.119434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:29:29.368 [2024-12-15 02:24:54.119452] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:29:29.368 [2024-12-15 02:24:54.119462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.368 [2024-12-15 02:24:54.119488] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:29:29.368 [2024-12-15 02:24:54.122608] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.368 [2024-12-15 02:24:54.122665] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:29:29.368 [2024-12-15 02:24:54.122678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.103 ms 00:29:29.368 [2024-12-15 02:24:54.122687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.368 [2024-12-15 02:24:54.122942] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.368 [2024-12-15 02:24:54.122956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:29:29.368 [2024-12-15 02:24:54.122966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.226 ms 00:29:29.368 [2024-12-15 02:24:54.122976] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.368 [2024-12-15 02:24:54.126776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.368 [2024-12-15 02:24:54.126805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P 00:29:29.368 [2024-12-15 02:24:54.126817] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.783 ms 00:29:29.368 [2024-12-15 02:24:54.126832] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.133753] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.133977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims 00:29:29.631 [2024-12-15 02:24:54.134000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.900 ms 00:29:29.631 [2024-12-15 02:24:54.134010] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.161596] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.161788] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:29:29.631 [2024-12-15 02:24:54.161811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.516 ms 00:29:29.631 [2024-12-15 02:24:54.161821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.183100] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.183163] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:29:29.631 [2024-12-15 02:24:54.183180] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 21.016 ms 00:29:29.631 [2024-12-15 02:24:54.183190] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.188489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.188688] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:29:29.631 [2024-12-15 02:24:54.188711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.214 ms 00:29:29.631 [2024-12-15 02:24:54.188733] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.215762] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.215962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:29:29.631 [2024-12-15 02:24:54.215984] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.005 ms 00:29:29.631 [2024-12-15 02:24:54.215992] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.242984] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.243156] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:29:29.631 [2024-12-15 02:24:54.243269] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.623 ms 00:29:29.631 [2024-12-15 02:24:54.243297] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.268685] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.268880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:29:29.631 [2024-12-15 02:24:54.269309] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.332 ms 00:29:29.631 [2024-12-15 02:24:54.269366] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.294838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.631 [2024-12-15 02:24:54.295029] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:29:29.631 [2024-12-15 02:24:54.295093] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.300 ms 00:29:29.631 [2024-12-15 02:24:54.295116] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.631 [2024-12-15 02:24:54.295264] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:29:29.631 [2024-12-15 02:24:54.295349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:29:29.631 [2024-12-15 02:24:54.295386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 1536 / 261120 wr_cnt: 1 state: open 00:29:29.631 [2024-12-15 02:24:54.295416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295446] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295584] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295613] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 
state: free 00:29:29.631 [2024-12-15 02:24:54.295643] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295761] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295791] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295848] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:29:29.631 [2024-12-15 02:24:54.295907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.295975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296173] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296415] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296443] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 
/ 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296501] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296530] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296558] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296587] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296615] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296692] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296724] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296754] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296872] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296901] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296929] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296958] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.296987] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297044] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297121] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297151] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297265] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297294] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297323] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297413] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297489] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297551] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297580] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297609] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.297962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298474] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298523] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298539] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298547] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298555] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298586] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298593] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298600] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298619] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298627] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:29:29.632 [2024-12-15 02:24:54.298635] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298642] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298650] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298658] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298665] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298673] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298689] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:29:29.633 [2024-12-15 02:24:54.298708] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:29:29.633 [2024-12-15 02:24:54.298717] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 0c183996-2205-4a0b-bdc0-38705690ad6f 00:29:29.633 [2024-12-15 02:24:54.298726] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 262656 00:29:29.633 [2024-12-15 02:24:54.298733] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:29:29.633 [2024-12-15 02:24:54.298741] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:29:29.633 [2024-12-15 02:24:54.298749] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:29:29.633 [2024-12-15 02:24:54.298769] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:29:29.633 [2024-12-15 02:24:54.298777] ftl_debug.c: 
220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:29:29.633 [2024-12-15 02:24:54.298785] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:29:29.633 [2024-12-15 02:24:54.298792] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:29:29.633 [2024-12-15 02:24:54.298799] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:29:29.633 [2024-12-15 02:24:54.298809] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.633 [2024-12-15 02:24:54.298818] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:29:29.633 [2024-12-15 02:24:54.298829] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.549 ms 00:29:29.633 [2024-12-15 02:24:54.298840] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.312879] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.633 [2024-12-15 02:24:54.313090] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:29:29.633 [2024-12-15 02:24:54.313113] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.981 ms 00:29:29.633 [2024-12-15 02:24:54.313122] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.313546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:29:29.633 [2024-12-15 02:24:54.313573] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:29:29.633 [2024-12-15 02:24:54.313584] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.397 ms 00:29:29.633 [2024-12-15 02:24:54.313593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.350559] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.633 [2024-12-15 02:24:54.350613] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:29:29.633 [2024-12-15 02:24:54.350626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.633 [2024-12-15 02:24:54.350635] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.350704] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.633 [2024-12-15 02:24:54.350718] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:29:29.633 [2024-12-15 02:24:54.350728] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.633 [2024-12-15 02:24:54.350738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.350828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.633 [2024-12-15 02:24:54.350840] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:29:29.633 [2024-12-15 02:24:54.350850] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.633 [2024-12-15 02:24:54.350859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.633 [2024-12-15 02:24:54.350876] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.633 [2024-12-15 02:24:54.350885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:29:29.633 [2024-12-15 02:24:54.350898] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.633 [2024-12-15 02:24:54.350906] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
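
On the 'WAF: inf' line in the statistics dump above: taking write amplification in the conventional sense (total device writes divided by user writes; the exact formula inside ftl_debug.c is not shown in this log), the counters printed alongside it, total writes 960 and user writes 0, make the ratio a division by zero, which the dump renders as inf. A small bash sketch of that check, for illustration only:

# WAF = total writes / user writes; with user writes == 0 report "inf".
total_writes=960
user_writes=0
if (( user_writes == 0 )); then
  echo "WAF: inf"
else
  awk -v t="$total_writes" -v u="$user_writes" 'BEGIN { printf "WAF: %.2f\n", t / u }'
fi
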
00:29:29.895 [2024-12-15 02:24:54.437141] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.437231] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:29:29.895 [2024-12-15 02:24:54.437247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.437256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.507764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.507832] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:29:29.895 [2024-12-15 02:24:54.507844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.507853] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.507915] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.507925] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:29:29.895 [2024-12-15 02:24:54.507935] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.507943] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.508018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:29:29.895 [2024-12-15 02:24:54.508027] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.508038] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.508146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:29:29.895 [2024-12-15 02:24:54.508155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.508164] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.508246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:29:29.895 [2024-12-15 02:24:54.508255] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.508263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.508320] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:29:29.895 [2024-12-15 02:24:54.508329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.508337] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508387] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:29:29.895 [2024-12-15 02:24:54.508399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:29:29.895 [2024-12-15 02:24:54.508408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:29:29.895 [2024-12-15 02:24:54.508419] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:29:29.895 [2024-12-15 02:24:54.508560] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 389.186 ms, result 0 00:29:30.847 00:29:30.847 00:29:30.847 02:24:55 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@96 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:32.861 /home/vagrant/spdk_repo/spdk/test/ftl/testfile2: OK 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@98 -- # trap - SIGINT SIGTERM EXIT 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@99 -- # restore_kill 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@31 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@32 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@33 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@34 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@35 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile2.md5 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@37 -- # killprocess 81939 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@954 -- # '[' -z 81939 ']' 00:29:32.861 Process with pid 81939 is not found 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@958 -- # kill -0 81939 00:29:32.861 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (81939) - No such process 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@981 -- # echo 'Process with pid 81939 is not found' 00:29:32.861 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@38 -- # rmmod nbd 00:29:33.433 Remove shared memory files 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/dirty_shutdown.sh@39 -- # remove_shm 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@207 -- # rm -f rm -f 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:29:33.433 ************************************ 00:29:33.433 END TEST ftl_dirty_shutdown 00:29:33.433 ************************************ 00:29:33.433 00:29:33.433 real 4m21.562s 00:29:33.433 user 4m35.499s 00:29:33.433 sys 0m24.660s 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:29:33.433 02:24:57 ftl.ftl_dirty_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.433 02:24:57 ftl -- ftl/ftl.sh@78 -- # run_test ftl_upgrade_shutdown /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:33.433 02:24:57 ftl -- common/autotest_common.sh@1105 -- # '[' 4 -le 1 ']' 00:29:33.433 02:24:57 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:29:33.433 02:24:57 ftl -- common/autotest_common.sh@10 -- # set +x 00:29:33.433 ************************************ 
00:29:33.433 START TEST ftl_upgrade_shutdown 00:29:33.433 ************************************ 00:29:33.433 02:24:57 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 0000:00:11.0 0000:00:10.0 00:29:33.433 * Looking for test storage... 00:29:33.433 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lcov --version 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@333 -- # local ver1 ver1_l 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@334 -- # local ver2 ver2_l 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # IFS=.-: 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@336 -- # read -ra ver1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # IFS=.-: 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@337 -- # read -ra ver2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@338 -- # local 'op=<' 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@340 -- # ver1_l=2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@341 -- # ver2_l=1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@344 -- # case "$op" in 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@345 -- # : 1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v = 0 )) 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # decimal 1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@365 -- # ver1[v]=1 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # decimal 2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@353 -- # local d=2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@355 -- # echo 2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@366 -- # ver2[v]=2 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- scripts/common.sh@368 -- # return 0 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:29:33.433 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:29:33.433 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.434 --rc genhtml_branch_coverage=1 00:29:33.434 --rc genhtml_function_coverage=1 00:29:33.434 --rc genhtml_legend=1 00:29:33.434 --rc geninfo_all_blocks=1 00:29:33.434 --rc geninfo_unexecuted_blocks=1 00:29:33.434 00:29:33.434 ' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:29:33.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.434 --rc genhtml_branch_coverage=1 00:29:33.434 --rc genhtml_function_coverage=1 00:29:33.434 --rc genhtml_legend=1 00:29:33.434 --rc geninfo_all_blocks=1 00:29:33.434 --rc geninfo_unexecuted_blocks=1 00:29:33.434 00:29:33.434 ' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:29:33.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.434 --rc genhtml_branch_coverage=1 00:29:33.434 --rc genhtml_function_coverage=1 00:29:33.434 --rc genhtml_legend=1 00:29:33.434 --rc geninfo_all_blocks=1 00:29:33.434 --rc geninfo_unexecuted_blocks=1 00:29:33.434 00:29:33.434 ' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:29:33.434 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:29:33.434 --rc genhtml_branch_coverage=1 00:29:33.434 --rc genhtml_function_coverage=1 00:29:33.434 --rc genhtml_legend=1 00:29:33.434 --rc geninfo_all_blocks=1 00:29:33.434 --rc geninfo_unexecuted_blocks=1 00:29:33.434 00:29:33.434 ' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@8 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/upgrade_shutdown.sh 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@23 -- # spdk_ini_pid= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@17 -- # trap 'cleanup; exit 1' SIGINT SIGTERM EXIT 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # export FTL_BDEV=ftl 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@19 -- # FTL_BDEV=ftl 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # export FTL_BASE=0000:00:11.0 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@20 -- # FTL_BASE=0000:00:11.0 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # export FTL_BASE_SIZE=20480 00:29:33.434 02:24:58 
ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@21 -- # FTL_BASE_SIZE=20480 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # export FTL_CACHE=0000:00:10.0 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@22 -- # FTL_CACHE=0000:00:10.0 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # export FTL_CACHE_SIZE=5120 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@23 -- # FTL_CACHE_SIZE=5120 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # export FTL_L2P_DRAM_LIMIT=2 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@24 -- # FTL_L2P_DRAM_LIMIT=2 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@26 -- # tcp_target_setup 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=84729 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 84729 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84729 ']' 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:29:33.434 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:33.434 02:24:58 ftl.ftl_upgrade_shutdown -- ftl/common.sh@87 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' 00:29:33.695 [2024-12-15 02:24:58.278180] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:29:33.695 [2024-12-15 02:24:58.278533] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84729 ] 00:29:33.695 [2024-12-15 02:24:58.438756] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:33.956 [2024-12-15 02:24:58.558590] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # params=('FTL_BDEV' 'FTL_BASE' 'FTL_BASE_SIZE' 'FTL_CACHE' 'FTL_CACHE_SIZE' 'FTL_L2P_DRAM_LIMIT') 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@99 -- # local params 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z ftl ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:11.0 ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 20480 ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 0000:00:10.0 ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 5120 ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@100 -- # for param in "${params[@]}" 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@101 -- # [[ -z 2 ]] 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # create_base_bdev base 0000:00:11.0 20480 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@54 -- # local name=base 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@56 -- # local size=20480 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@59 -- # local base_bdev 00:29:34.528 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@60 -- # base_bdev=basen1 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@62 -- # local base_size 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # get_bdev_size basen1 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=basen1 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 
-- # local nb 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b basen1 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:35.101 { 00:29:35.101 "name": "basen1", 00:29:35.101 "aliases": [ 00:29:35.101 "248cd3f5-8c08-42a8-ab98-e6745ae00216" 00:29:35.101 ], 00:29:35.101 "product_name": "NVMe disk", 00:29:35.101 "block_size": 4096, 00:29:35.101 "num_blocks": 1310720, 00:29:35.101 "uuid": "248cd3f5-8c08-42a8-ab98-e6745ae00216", 00:29:35.101 "numa_id": -1, 00:29:35.101 "assigned_rate_limits": { 00:29:35.101 "rw_ios_per_sec": 0, 00:29:35.101 "rw_mbytes_per_sec": 0, 00:29:35.101 "r_mbytes_per_sec": 0, 00:29:35.101 "w_mbytes_per_sec": 0 00:29:35.101 }, 00:29:35.101 "claimed": true, 00:29:35.101 "claim_type": "read_many_write_one", 00:29:35.101 "zoned": false, 00:29:35.101 "supported_io_types": { 00:29:35.101 "read": true, 00:29:35.101 "write": true, 00:29:35.101 "unmap": true, 00:29:35.101 "flush": true, 00:29:35.101 "reset": true, 00:29:35.101 "nvme_admin": true, 00:29:35.101 "nvme_io": true, 00:29:35.101 "nvme_io_md": false, 00:29:35.101 "write_zeroes": true, 00:29:35.101 "zcopy": false, 00:29:35.101 "get_zone_info": false, 00:29:35.101 "zone_management": false, 00:29:35.101 "zone_append": false, 00:29:35.101 "compare": true, 00:29:35.101 "compare_and_write": false, 00:29:35.101 "abort": true, 00:29:35.101 "seek_hole": false, 00:29:35.101 "seek_data": false, 00:29:35.101 "copy": true, 00:29:35.101 "nvme_iov_md": false 00:29:35.101 }, 00:29:35.101 "driver_specific": { 00:29:35.101 "nvme": [ 00:29:35.101 { 00:29:35.101 "pci_address": "0000:00:11.0", 00:29:35.101 "trid": { 00:29:35.101 "trtype": "PCIe", 00:29:35.101 "traddr": "0000:00:11.0" 00:29:35.101 }, 00:29:35.101 "ctrlr_data": { 00:29:35.101 "cntlid": 0, 00:29:35.101 "vendor_id": "0x1b36", 00:29:35.101 "model_number": "QEMU NVMe Ctrl", 00:29:35.101 "serial_number": "12341", 00:29:35.101 "firmware_revision": "8.0.0", 00:29:35.101 "subnqn": "nqn.2019-08.org.qemu:12341", 00:29:35.101 "oacs": { 00:29:35.101 "security": 0, 00:29:35.101 "format": 1, 00:29:35.101 "firmware": 0, 00:29:35.101 "ns_manage": 1 00:29:35.101 }, 00:29:35.101 "multi_ctrlr": false, 00:29:35.101 "ana_reporting": false 00:29:35.101 }, 00:29:35.101 "vs": { 00:29:35.101 "nvme_version": "1.4" 00:29:35.101 }, 00:29:35.101 "ns_data": { 00:29:35.101 "id": 1, 00:29:35.101 "can_share": false 00:29:35.101 } 00:29:35.101 } 00:29:35.101 ], 00:29:35.101 "mp_policy": "active_passive" 00:29:35.101 } 00:29:35.101 } 00:29:35.101 ]' 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=1310720 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 5120 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@63 -- # base_size=5120 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@64 -- # [[ 20480 -le 5120 ]] 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@67 -- # clear_lvols 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- 
ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:29:35.101 02:24:59 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:29:35.362 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@28 -- # stores=f4c8214f-0e78-43ad-9028-4c88d99cdace 00:29:35.362 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@29 -- # for lvs in $stores 00:29:35.362 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u f4c8214f-0e78-43ad-9028-4c88d99cdace 00:29:35.622 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore basen1 lvs 00:29:35.882 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@68 -- # lvs=8746e643-1576-460d-8e8d-19e56ee9f866 00:29:35.882 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create basen1p0 20480 -t -u 8746e643-1576-460d-8e8d-19e56ee9f866 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@107 -- # base_bdev=5661145e-4d0f-4e7f-98a1-9fc89f4967cc 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@108 -- # [[ -z 5661145e-4d0f-4e7f-98a1-9fc89f4967cc ]] 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # create_nv_cache_bdev cache 0000:00:10.0 5661145e-4d0f-4e7f-98a1-9fc89f4967cc 5120 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@35 -- # local name=cache 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@37 -- # local base_bdev=5661145e-4d0f-4e7f-98a1-9fc89f4967cc 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@38 -- # local cache_size=5120 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # get_bdev_size 5661145e-4d0f-4e7f-98a1-9fc89f4967cc 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1382 -- # local bdev_name=5661145e-4d0f-4e7f-98a1-9fc89f4967cc 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1383 -- # local bdev_info 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1384 -- # local bs 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1385 -- # local nb 00:29:36.141 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b 5661145e-4d0f-4e7f-98a1-9fc89f4967cc 00:29:36.399 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:29:36.399 { 00:29:36.399 "name": "5661145e-4d0f-4e7f-98a1-9fc89f4967cc", 00:29:36.399 "aliases": [ 00:29:36.399 "lvs/basen1p0" 00:29:36.399 ], 00:29:36.399 "product_name": "Logical Volume", 00:29:36.399 "block_size": 4096, 00:29:36.399 "num_blocks": 5242880, 00:29:36.399 "uuid": "5661145e-4d0f-4e7f-98a1-9fc89f4967cc", 00:29:36.399 "assigned_rate_limits": { 00:29:36.399 "rw_ios_per_sec": 0, 00:29:36.399 "rw_mbytes_per_sec": 0, 00:29:36.399 "r_mbytes_per_sec": 0, 00:29:36.399 "w_mbytes_per_sec": 0 00:29:36.399 }, 00:29:36.399 "claimed": false, 00:29:36.399 "zoned": false, 00:29:36.399 "supported_io_types": { 00:29:36.399 "read": true, 00:29:36.399 "write": true, 00:29:36.399 "unmap": true, 00:29:36.399 "flush": false, 00:29:36.399 "reset": true, 00:29:36.399 "nvme_admin": false, 00:29:36.399 "nvme_io": false, 00:29:36.399 "nvme_io_md": false, 00:29:36.399 "write_zeroes": 
true, 00:29:36.399 "zcopy": false, 00:29:36.399 "get_zone_info": false, 00:29:36.399 "zone_management": false, 00:29:36.399 "zone_append": false, 00:29:36.399 "compare": false, 00:29:36.399 "compare_and_write": false, 00:29:36.399 "abort": false, 00:29:36.399 "seek_hole": true, 00:29:36.399 "seek_data": true, 00:29:36.399 "copy": false, 00:29:36.399 "nvme_iov_md": false 00:29:36.399 }, 00:29:36.399 "driver_specific": { 00:29:36.399 "lvol": { 00:29:36.399 "lvol_store_uuid": "8746e643-1576-460d-8e8d-19e56ee9f866", 00:29:36.399 "base_bdev": "basen1", 00:29:36.399 "thin_provision": true, 00:29:36.399 "num_allocated_clusters": 0, 00:29:36.399 "snapshot": false, 00:29:36.399 "clone": false, 00:29:36.399 "esnap_clone": false 00:29:36.399 } 00:29:36.399 } 00:29:36.399 } 00:29:36.399 ]' 00:29:36.399 02:25:00 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1387 -- # bs=4096 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1388 -- # nb=5242880 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1391 -- # bdev_size=20480 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1392 -- # echo 20480 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@41 -- # local base_size=1024 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@44 -- # local nvc_bdev 00:29:36.399 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0 00:29:36.656 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@45 -- # nvc_bdev=cachen1 00:29:36.656 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@47 -- # [[ -z 5120 ]] 00:29:36.656 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create cachen1 -s 5120 1 00:29:36.913 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@113 -- # cache_bdev=cachen1p0 00:29:36.913 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@114 -- # [[ -z cachen1p0 ]] 00:29:36.913 02:25:01 ftl.ftl_upgrade_shutdown -- ftl/common.sh@119 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 60 bdev_ftl_create -b ftl -d 5661145e-4d0f-4e7f-98a1-9fc89f4967cc -c cachen1p0 --l2p_dram_limit 2 00:29:37.173 [2024-12-15 02:25:01.703134] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.173 [2024-12-15 02:25:01.703175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:29:37.173 [2024-12-15 02:25:01.703188] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:37.173 [2024-12-15 02:25:01.703204] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.173 [2024-12-15 02:25:01.703252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.173 [2024-12-15 02:25:01.703260] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:29:37.173 [2024-12-15 02:25:01.703267] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:29:37.173 [2024-12-15 02:25:01.703273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.173 [2024-12-15 02:25:01.703290] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:29:37.173 [2024-12-15 
02:25:01.703835] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:29:37.173 [2024-12-15 02:25:01.703850] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.173 [2024-12-15 02:25:01.703856] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:29:37.173 [2024-12-15 02:25:01.703865] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.562 ms 00:29:37.173 [2024-12-15 02:25:01.703871] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.173 [2024-12-15 02:25:01.703895] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl] Create new FTL, UUID f0c11f49-e1c8-4c0f-9224-6a2fd4d8c2d2 00:29:37.174 [2024-12-15 02:25:01.704877] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.704895] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Default-initialize superblock 00:29:37.174 [2024-12-15 02:25:01.704903] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:29:37.174 [2024-12-15 02:25:01.704910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.709891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.710002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:29:37.174 [2024-12-15 02:25:01.710054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.927 ms 00:29:37.174 [2024-12-15 02:25:01.710074] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.710205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.710233] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:29:37.174 [2024-12-15 02:25:01.710290] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.025 ms 00:29:37.174 [2024-12-15 02:25:01.710312] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.710364] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.710405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:29:37.174 [2024-12-15 02:25:01.710421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:29:37.174 [2024-12-15 02:25:01.710474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.710505] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:29:37.174 [2024-12-15 02:25:01.713450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.713537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:29:37.174 [2024-12-15 02:25:01.713593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.949 ms 00:29:37.174 [2024-12-15 02:25:01.713612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.713728] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.713751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:29:37.174 [2024-12-15 02:25:01.713801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:29:37.174 [2024-12-15 02:25:01.713819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.713863] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 1 00:29:37.174 [2024-12-15 02:25:01.714019] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:29:37.174 [2024-12-15 02:25:01.714078] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:29:37.174 [2024-12-15 02:25:01.714127] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:29:37.174 [2024-12-15 02:25:01.714155] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:29:37.174 [2024-12-15 02:25:01.714221] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:29:37.174 [2024-12-15 02:25:01.714248] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:29:37.174 [2024-12-15 02:25:01.714263] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:29:37.174 [2024-12-15 02:25:01.714282] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:29:37.174 [2024-12-15 02:25:01.714296] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:29:37.174 [2024-12-15 02:25:01.714345] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.714362] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:29:37.174 [2024-12-15 02:25:01.714378] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.484 ms 00:29:37.174 [2024-12-15 02:25:01.714393] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.714471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.174 [2024-12-15 02:25:01.714496] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:29:37.174 [2024-12-15 02:25:01.714525] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:29:37.174 [2024-12-15 02:25:01.714540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.174 [2024-12-15 02:25:01.714642] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:29:37.174 [2024-12-15 02:25:01.714666] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:29:37.174 [2024-12-15 02:25:01.714683] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:37.174 [2024-12-15 02:25:01.714698] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.714744] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:29:37.174 [2024-12-15 02:25:01.714783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.714820] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:29:37.174 [2024-12-15 02:25:01.714838] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:29:37.174 [2024-12-15 02:25:01.714854] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:29:37.174 [2024-12-15 02:25:01.714868] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.714883] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:29:37.174 [2024-12-15 02:25:01.714932] ftl_layout.c: 131:dump_region: *NOTICE*: 
[FTL][ftl] offset: 14.75 MiB 00:29:37.174 [2024-12-15 02:25:01.714952] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.714966] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:29:37.174 [2024-12-15 02:25:01.714981] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:29:37.174 [2024-12-15 02:25:01.714996] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715013] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:29:37.174 [2024-12-15 02:25:01.715053] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:29:37.174 [2024-12-15 02:25:01.715071] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715086] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:29:37.174 [2024-12-15 02:25:01.715101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:29:37.174 [2024-12-15 02:25:01.715116] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715131] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:29:37.174 [2024-12-15 02:25:01.715145] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:29:37.174 [2024-12-15 02:25:01.715184] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715210] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:29:37.174 [2024-12-15 02:25:01.715226] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:29:37.174 [2024-12-15 02:25:01.715240] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715256] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:29:37.174 [2024-12-15 02:25:01.715270] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:29:37.174 [2024-12-15 02:25:01.715285] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715327] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:29:37.174 [2024-12-15 02:25:01.715348] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:29:37.174 [2024-12-15 02:25:01.715362] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:29:37.174 [2024-12-15 02:25:01.715392] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715407] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715421] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:29:37.174 [2024-12-15 02:25:01.715437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715480] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:29:37.174 [2024-12-15 02:25:01.715494] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:29:37.174 [2024-12-15 02:25:01.715509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715522] ftl_layout.c: 775:ftl_layout_dump: 
*NOTICE*: [FTL][ftl] Base device layout: 00:29:37.174 [2024-12-15 02:25:01.715538] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:29:37.174 [2024-12-15 02:25:01.715572] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:29:37.174 [2024-12-15 02:25:01.715591] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:29:37.174 [2024-12-15 02:25:01.715621] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:29:37.174 [2024-12-15 02:25:01.715656] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:29:37.175 [2024-12-15 02:25:01.715672] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:29:37.175 [2024-12-15 02:25:01.715688] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:29:37.175 [2024-12-15 02:25:01.715703] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:29:37.175 [2024-12-15 02:25:01.715718] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:29:37.175 [2024-12-15 02:25:01.715771] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:29:37.175 [2024-12-15 02:25:01.715813] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.715839] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:29:37.175 [2024-12-15 02:25:01.715862] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.715917] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.715944] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:29:37.175 [2024-12-15 02:25:01.715967] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:29:37.175 [2024-12-15 02:25:01.715990] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:29:37.175 [2024-12-15 02:25:01.716036] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:29:37.175 [2024-12-15 02:25:01.716061] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716083] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716109] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716204] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716229] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716238] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:29:37.175 [2024-12-15 02:25:01.716243] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:29:37.175 [2024-12-15 02:25:01.716252] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716258] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:29:37.175 [2024-12-15 02:25:01.716265] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:29:37.175 [2024-12-15 02:25:01.716271] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:29:37.175 [2024-12-15 02:25:01.716278] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:29:37.175 [2024-12-15 02:25:01.716284] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:37.175 [2024-12-15 02:25:01.716292] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:29:37.175 [2024-12-15 02:25:01.716298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.697 ms 00:29:37.175 [2024-12-15 02:25:01.716305] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:37.175 [2024-12-15 02:25:01.716351] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 
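For orientation, a condensed sketch of the RPC sequence the trace above walks through to assemble the FTL bdev under test (rpc.py abbreviates the full /home/vagrant/spdk_repo/spdk/scripts/rpc.py path, and the <...> placeholders stand for the generated UUIDs shown in the log):

  # base device: 20480 MiB thin-provisioned lvol on the 0000:00:11.0 namespace
  rpc.py bdev_nvme_attach_controller -b base -t PCIe -a 0000:00:11.0   # exposes basen1
  rpc.py bdev_lvol_delete_lvstore -u <stale-lvs-uuid>                  # clear_lvols: drop leftovers
  rpc.py bdev_lvol_create_lvstore basen1 lvs
  rpc.py bdev_lvol_create basen1p0 20480 -t -u <lvs-uuid>
  # NV cache: 5120 MiB split of the 0000:00:10.0 namespace
  rpc.py bdev_nvme_attach_controller -b cache -t PCIe -a 0000:00:10.0  # exposes cachen1
  rpc.py bdev_split_create cachen1 -s 5120 1                           # yields cachen1p0
  # FTL on top of both; the limit is in MiB, as the l2p_cache resident-size notice below confirms
  rpc.py -t 60 bdev_ftl_create -b ftl -d <lvol-uuid> -c cachen1p0 --l2p_dram_limit 2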
00:29:37.175 [2024-12-15 02:25:01.716362] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:29:41.374 [2024-12-15 02:25:05.488035] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.488121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:29:41.374 [2024-12-15 02:25:05.488139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3771.668 ms 00:29:41.374 [2024-12-15 02:25:05.488151] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.520112] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.520359] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:29:41.374 [2024-12-15 02:25:05.520383] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.683 ms 00:29:41.374 [2024-12-15 02:25:05.520395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.520492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.520509] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:29:41.374 [2024-12-15 02:25:05.520518] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:29:41.374 [2024-12-15 02:25:05.520535] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.556167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.556386] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:29:41.374 [2024-12-15 02:25:05.556408] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.580 ms 00:29:41.374 [2024-12-15 02:25:05.556420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.556459] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.556479] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:29:41.374 [2024-12-15 02:25:05.556488] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:29:41.374 [2024-12-15 02:25:05.556498] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.557026] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.557053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:29:41.374 [2024-12-15 02:25:05.557072] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.471 ms 00:29:41.374 [2024-12-15 02:25:05.557085] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.557130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.557144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:29:41.374 [2024-12-15 02:25:05.557155] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:29:41.374 [2024-12-15 02:25:05.557167] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.574636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.574685] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:29:41.374 [2024-12-15 02:25:05.574697] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.449 ms 00:29:41.374 [2024-12-15 02:25:05.574707] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.598995] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:29:41.374 [2024-12-15 02:25:05.600515] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.600741] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:29:41.374 [2024-12-15 02:25:05.600775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.716 ms 00:29:41.374 [2024-12-15 02:25:05.600788] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.631028] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.631078] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear L2P 00:29:41.374 [2024-12-15 02:25:05.631095] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 30.182 ms 00:29:41.374 [2024-12-15 02:25:05.631103] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.631227] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.631242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:29:41.374 [2024-12-15 02:25:05.631259] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.071 ms 00:29:41.374 [2024-12-15 02:25:05.631267] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.656679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.656726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial band info metadata 00:29:41.374 [2024-12-15 02:25:05.656742] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.351 ms 00:29:41.374 [2024-12-15 02:25:05.656751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.682145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.682192] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Save initial chunk info metadata 00:29:41.374 [2024-12-15 02:25:05.682221] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 25.336 ms 00:29:41.374 [2024-12-15 02:25:05.682231] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.682867] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.682887] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:29:41.374 [2024-12-15 02:25:05.682900] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.587 ms 00:29:41.374 [2024-12-15 02:25:05.682910] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.766611] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.766659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Wipe P2L region 00:29:41.374 [2024-12-15 02:25:05.766679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 83.636 ms 00:29:41.374 [2024-12-15 02:25:05.766687] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.794552] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:29:41.374 [2024-12-15 02:25:05.794601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim map 00:29:41.374 [2024-12-15 02:25:05.794617] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 27.766 ms 00:29:41.374 [2024-12-15 02:25:05.794626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.820823] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.820868] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Clear trim log 00:29:41.374 [2024-12-15 02:25:05.820883] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.141 ms 00:29:41.374 [2024-12-15 02:25:05.820891] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.847097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.847142] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:29:41.374 [2024-12-15 02:25:05.847156] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 26.152 ms 00:29:41.374 [2024-12-15 02:25:05.847165] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.847234] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.847246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:29:41.374 [2024-12-15 02:25:05.847261] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:29:41.374 [2024-12-15 02:25:05.847269] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.847360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:29:41.374 [2024-12-15 02:25:05.847376] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:29:41.374 [2024-12-15 02:25:05.847388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.036 ms 00:29:41.374 [2024-12-15 02:25:05.847397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:29:41.374 [2024-12-15 02:25:05.849094] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4145.437 ms, result 0 00:29:41.374 { 00:29:41.374 "name": "ftl", 00:29:41.374 "uuid": "f0c11f49-e1c8-4c0f-9224-6a2fd4d8c2d2" 00:29:41.374 } 00:29:41.374 02:25:05 ftl.ftl_upgrade_shutdown -- ftl/common.sh@121 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_transport --trtype TCP 00:29:41.375 [2024-12-15 02:25:06.075686] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:29:41.375 02:25:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@122 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_create_subsystem nqn.2018-09.io.spdk:cnode0 -a -m 1 00:29:41.634 02:25:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@123 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_ns nqn.2018-09.io.spdk:cnode0 ftl 00:29:41.893 [2024-12-15 02:25:06.504178] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:29:41.893 02:25:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@124 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py nvmf_subsystem_add_listener nqn.2018-09.io.spdk:cnode0 -t TCP -f ipv4 -s 4420 -a 127.0.0.1 00:29:42.152 [2024-12-15 02:25:06.673420] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:29:42.152 02:25:06 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@126 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_config 00:29:42.412 Fill FTL, iteration 1 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@28 -- # size=1073741824 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@29 -- # seek=0 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@30 -- # skip=0 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@31 -- # bs=1048576 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@32 -- # count=1024 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@33 -- # iterations=2 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@34 -- # qd=2 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@35 -- # sums=() 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i = 0 )) 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 1' 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@157 -- # [[ -z ftl ]] 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@162 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@163 -- # spdk_ini_pid=84858 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@164 -- # export spdk_ini_pid 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@165 -- # waitforlisten 84858 /var/tmp/spdk.tgt.sock 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 84858 ']' 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.tgt.sock 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock...' 00:29:42.412 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.tgt.sock... 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:29:42.412 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:29:42.412 [2024-12-15 02:25:07.092852] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
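tcp_dd is the indirection worth noting here: tcp_initiator_setup boots a throwaway spdk_tgt on core 1 with its own RPC socket, attaches the FTL namespace exported over NVMe/TCP (it shows up as ftln1 in the entries below), saves the bdev subsystem config as JSON, and kills the target again; the actual spdk_dd run then loads that JSON and re-creates the TCP attachment itself. A minimal sketch, assuming the config lands in the test/ftl/config/ini.json path checked above:

  spdk_tgt --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock &
  # once the UNIX domain socket is up, attach the exported FTL namespace
  rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl \
      -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0
  {
    echo '{"subsystems": ['
    rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev
    echo ']}'
  } > test/ftl/config/ini.json
  kill "$spdk_ini_pid"    # the killprocess 84858 entries below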
00:29:42.412 [2024-12-15 02:25:07.093126] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84858 ] 00:29:42.670 [2024-12-15 02:25:07.251138] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:42.670 [2024-12-15 02:25:07.357336] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:43.236 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:29:43.236 02:25:07 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:29:43.236 02:25:07 ftl.ftl_upgrade_shutdown -- ftl/common.sh@167 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock bdev_nvme_attach_controller -b ftl -t tcp -a 127.0.0.1 -s 4420 -f ipv4 -n nqn.2018-09.io.spdk:cnode0 00:29:43.494 ftln1 00:29:43.494 02:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@171 -- # echo '{"subsystems": [' 00:29:43.494 02:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@172 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock save_subsystem_config -n bdev 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@173 -- # echo ']}' 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- ftl/common.sh@176 -- # killprocess 84858 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84858 ']' 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84858 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84858 00:29:43.752 killing process with pid 84858 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_1 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_1 = sudo ']' 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84858' 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84858 00:29:43.752 02:25:08 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84858 00:29:45.653 02:25:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@177 -- # unset spdk_ini_pid 00:29:45.653 02:25:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0 00:29:45.653 [2024-12-15 02:25:10.027469] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
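The fill itself is one spdk_dd invocation streaming /dev/urandom into ftln1: 1024 blocks of 1 MiB at queue depth 2, i.e. 1 GiB per iteration. Reproduced, with paths abbreviated, from the trace above:

  spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=0

The ~240 MBps "Copying:" lines that follow are spdk_dd's own progress reporting.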
00:29:45.653 [2024-12-15 02:25:10.027747] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84900 ] 00:29:45.653 [2024-12-15 02:25:10.184186] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:45.653 [2024-12-15 02:25:10.277359] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:47.038  [2024-12-15T02:25:12.744Z] Copying: 242/1024 [MB] (242 MBps) [2024-12-15T02:25:13.685Z] Copying: 484/1024 [MB] (242 MBps) [2024-12-15T02:25:14.625Z] Copying: 722/1024 [MB] (238 MBps) [2024-12-15T02:25:14.886Z] Copying: 960/1024 [MB] (238 MBps) [2024-12-15T02:25:15.826Z] Copying: 1024/1024 [MB] (average 239 MBps) 00:29:51.061 00:29:51.061 Calculate MD5 checksum, iteration 1 00:29:51.061 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=1024 00:29:51.061 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 1' 00:29:51.061 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.062 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:51.062 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:51.062 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:51.062 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:51.062 02:25:15 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:29:51.062 [2024-12-15 02:25:15.576150] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
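The checksum pass is the mirror image: read the same 1 GiB back out of ftln1 into a scratch file, then hash it. As the next entries show, md5sum's first field is what the test keeps:

  spdk_dd --cpumask='[1]' --rpc-socket=/var/tmp/spdk.tgt.sock \
      --json=test/ftl/config/ini.json \
      --ib=ftln1 --of=test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
  md5sum test/ftl/file | cut -f1 -d' '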
00:29:51.062 [2024-12-15 02:25:15.576270] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid84958 ] 00:29:51.062 [2024-12-15 02:25:15.729086] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:51.321 [2024-12-15 02:25:15.827928] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:52.705  [2024-12-15T02:25:18.043Z] Copying: 591/1024 [MB] (591 MBps) [2024-12-15T02:25:18.614Z] Copying: 1024/1024 [MB] (average 606 MBps) 00:29:53.849 00:29:53.849 02:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=1024 00:29:53.849 02:25:18 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:29:55.818 Fill FTL, iteration 2 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=f753765335ac2ad3ff91c62765a66e84 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@39 -- # echo 'Fill FTL, iteration 2' 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@40 -- # tcp_dd --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:29:55.818 02:25:20 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --if=/dev/urandom --ob=ftln1 --bs=1048576 --count=1024 --qd=2 --seek=1024 00:29:55.818 [2024-12-15 02:25:20.449511] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
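Bookkeeping between iterations, pieced together from the sums[i]= and (( i++ )) entries above; $file is a stand-in for the test/ftl/file path, and the stored hashes are presumably re-checked after the prep_upgrade_on_shutdown path exercised further down:

  sums[i]=$(md5sum "$file" | cut -f1 -d' ')   # e.g. f753765335ac2ad3ff91c62765a66e84
  (( i++ ))                                   # next pass of the fill/checksum loop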
00:29:55.818 [2024-12-15 02:25:20.449640] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85016 ] 00:29:56.095 [2024-12-15 02:25:20.611979] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:29:56.095 [2024-12-15 02:25:20.718885] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:29:57.479  [2024-12-15T02:25:23.186Z] Copying: 143/1024 [MB] (143 MBps) [2024-12-15T02:25:24.129Z] Copying: 361/1024 [MB] (218 MBps) [2024-12-15T02:25:25.511Z] Copying: 598/1024 [MB] (237 MBps) [2024-12-15T02:25:26.084Z] Copying: 835/1024 [MB] (237 MBps) [2024-12-15T02:25:26.655Z] Copying: 1024/1024 [MB] (average 213 MBps) 00:30:01.890 00:30:01.890 Calculate MD5 checksum, iteration 2 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@41 -- # seek=2048 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@43 -- # echo 'Calculate MD5 checksum, iteration 2' 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@44 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:01.890 02:25:26 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:01.890 [2024-12-15 02:25:26.580128] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
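A comment-only recap of the offset arithmetic, matching the seek=/skip= values as they advance through the trace:

  # bs=1048576 * count=1024  =  1 GiB per pass (seek/skip count 1 MiB blocks)
  # pass 1: fill --seek=0,    read back --skip=0,    then seek=1024, skip=1024
  # pass 2: fill --seek=1024, read back --skip=1024, then seek=2048, skip=2048
  # two passes therefore cover the first 2 GiB of the 20 GiB FTL bdev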
00:30:01.890 [2024-12-15 02:25:26.580296] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85081 ] 00:30:02.152 [2024-12-15 02:25:26.739726] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:02.152 [2024-12-15 02:25:26.843871] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:03.538  [2024-12-15T02:25:29.244Z] Copying: 665/1024 [MB] (665 MBps) [2024-12-15T02:25:30.187Z] Copying: 1024/1024 [MB] (average 640 MBps) 00:30:05.422 00:30:05.422 02:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@45 -- # skip=2048 00:30:05.422 02:25:29 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@47 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # cut -f1 '-d ' 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@48 -- # sums[i]=32689d92d1969d5b321580cf4eab64db 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i++ )) 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@38 -- # (( i < iterations )) 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@52 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:07.954 [2024-12-15 02:25:32.325228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.954 [2024-12-15 02:25:32.325265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:07.954 [2024-12-15 02:25:32.325277] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:07.954 [2024-12-15 02:25:32.325283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.954 [2024-12-15 02:25:32.325301] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.954 [2024-12-15 02:25:32.325311] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:07.954 [2024-12-15 02:25:32.325317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:07.954 [2024-12-15 02:25:32.325324] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.954 [2024-12-15 02:25:32.325339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:07.954 [2024-12-15 02:25:32.325345] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:07.954 [2024-12-15 02:25:32.325352] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:07.954 [2024-12-15 02:25:32.325357] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:07.954 [2024-12-15 02:25:32.325404] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.182 ms, result 0 00:30:07.954 true 00:30:07.954 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@53 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:07.954 { 00:30:07.954 "name": "ftl", 00:30:07.954 "properties": [ 00:30:07.954 { 00:30:07.955 "name": "superblock_version", 00:30:07.955 "value": 5, 00:30:07.955 "read-only": true 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "name": "base_device", 00:30:07.955 "bands": [ 00:30:07.955 { 00:30:07.955 "id": 0, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 
00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 1, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 2, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 3, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 4, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 5, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 6, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 7, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 8, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 9, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 10, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 11, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 12, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 13, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 14, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 15, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 16, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 17, 00:30:07.955 "state": "FREE", 00:30:07.955 "validity": 0.0 00:30:07.955 } 00:30:07.955 ], 00:30:07.955 "read-only": true 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "name": "cache_device", 00:30:07.955 "type": "bdev", 00:30:07.955 "chunks": [ 00:30:07.955 { 00:30:07.955 "id": 0, 00:30:07.955 "state": "INACTIVE", 00:30:07.955 "utilization": 0.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 1, 00:30:07.955 "state": "CLOSED", 00:30:07.955 "utilization": 1.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 2, 00:30:07.955 "state": "CLOSED", 00:30:07.955 "utilization": 1.0 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 3, 00:30:07.955 "state": "OPEN", 00:30:07.955 "utilization": 0.001953125 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "id": 4, 00:30:07.955 "state": "OPEN", 00:30:07.955 "utilization": 0.0 00:30:07.955 } 00:30:07.955 ], 00:30:07.955 "read-only": true 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "name": "verbose_mode", 00:30:07.955 "value": true, 00:30:07.955 "unit": "", 00:30:07.955 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:07.955 }, 00:30:07.955 { 00:30:07.955 "name": "prep_upgrade_on_shutdown", 00:30:07.955 "value": false, 00:30:07.955 "unit": "", 00:30:07.955 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:07.955 } 00:30:07.955 ] 00:30:07.955 } 00:30:07.955 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@56 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true 00:30:08.213 [2024-12-15 02:25:32.744707] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 
00:30:08.214 [2024-12-15 02:25:32.744844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:08.214 [2024-12-15 02:25:32.744897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:08.214 [2024-12-15 02:25:32.744915] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.214 [2024-12-15 02:25:32.744947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.214 [2024-12-15 02:25:32.744964] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:08.214 [2024-12-15 02:25:32.745012] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:08.214 [2024-12-15 02:25:32.745029] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.214 [2024-12-15 02:25:32.745054] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.214 [2024-12-15 02:25:32.745069] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:08.214 [2024-12-15 02:25:32.745129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:08.214 [2024-12-15 02:25:32.745143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.214 [2024-12-15 02:25:32.745263] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.545 ms, result 0 00:30:08.214 true 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # ftl_get_properties 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # jq '[.properties[] | select(.name == "cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@63 -- # used=3 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@64 -- # [[ 3 -eq 0 ]] 00:30:08.214 02:25:32 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@70 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:08.472 [2024-12-15 02:25:33.157059] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.472 [2024-12-15 02:25:33.157091] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:08.472 [2024-12-15 02:25:33.157100] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:08.472 [2024-12-15 02:25:33.157106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.472 [2024-12-15 02:25:33.157123] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.472 [2024-12-15 02:25:33.157130] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:08.472 [2024-12-15 02:25:33.157136] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:08.472 [2024-12-15 02:25:33.157141] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:08.472 [2024-12-15 02:25:33.157168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:08.472 [2024-12-15 02:25:33.157175] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:08.472 [2024-12-15 02:25:33.157180] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:08.472 [2024-12-15 02:25:33.157186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: 
[FTL][ftl] status: 0 00:30:08.472 [2024-12-15 02:25:33.157239] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.172 ms, result 0 00:30:08.472 true 00:30:08.472 02:25:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@71 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:08.730 { 00:30:08.730 "name": "ftl", 00:30:08.730 "properties": [ 00:30:08.730 { 00:30:08.730 "name": "superblock_version", 00:30:08.730 "value": 5, 00:30:08.730 "read-only": true 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "name": "base_device", 00:30:08.730 "bands": [ 00:30:08.730 { 00:30:08.730 "id": 0, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 1, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 2, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 3, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 4, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 5, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 6, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 7, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 8, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 9, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 10, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 11, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 12, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 13, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 14, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 15, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.730 }, 00:30:08.730 { 00:30:08.730 "id": 16, 00:30:08.730 "state": "FREE", 00:30:08.730 "validity": 0.0 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "id": 17, 00:30:08.731 "state": "FREE", 00:30:08.731 "validity": 0.0 00:30:08.731 } 00:30:08.731 ], 00:30:08.731 "read-only": true 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "name": "cache_device", 00:30:08.731 "type": "bdev", 00:30:08.731 "chunks": [ 00:30:08.731 { 00:30:08.731 "id": 0, 00:30:08.731 "state": "INACTIVE", 00:30:08.731 "utilization": 0.0 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "id": 1, 00:30:08.731 "state": "CLOSED", 00:30:08.731 "utilization": 1.0 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "id": 2, 00:30:08.731 "state": "CLOSED", 00:30:08.731 "utilization": 1.0 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "id": 3, 00:30:08.731 "state": "OPEN", 00:30:08.731 "utilization": 0.001953125 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "id": 4, 00:30:08.731 "state": "OPEN", 00:30:08.731 "utilization": 0.0 00:30:08.731 } 00:30:08.731 ], 00:30:08.731 "read-only": true 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "name": "verbose_mode", 
00:30:08.731 "value": true, 00:30:08.731 "unit": "", 00:30:08.731 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:08.731 }, 00:30:08.731 { 00:30:08.731 "name": "prep_upgrade_on_shutdown", 00:30:08.731 "value": true, 00:30:08.731 "unit": "", 00:30:08.731 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:08.731 } 00:30:08.731 ] 00:30:08.731 } 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@74 -- # tcp_target_shutdown 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 84729 ]] 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 84729 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 84729 ']' 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 84729 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 84729 00:30:08.731 killing process with pid 84729 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 84729' 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@973 -- # kill 84729 00:30:08.731 02:25:33 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 84729 00:30:09.297 [2024-12-15 02:25:33.936332] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:09.297 [2024-12-15 02:25:33.947492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.297 [2024-12-15 02:25:33.947527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:09.297 [2024-12-15 02:25:33.947537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:09.297 [2024-12-15 02:25:33.947543] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:09.297 [2024-12-15 02:25:33.947561] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:09.297 [2024-12-15 02:25:33.949635] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:09.298 [2024-12-15 02:25:33.949658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:09.298 [2024-12-15 02:25:33.949666] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.063 ms 00:30:09.298 [2024-12-15 02:25:33.949673] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.798764] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.798838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:17.427 [2024-12-15 02:25:41.798862] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7849.038 ms 00:30:17.427 [2024-12-15 02:25:41.798872] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.800550] mngt/ftl_mngt.c: 427:trace_step: 
*NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.800588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:17.427 [2024-12-15 02:25:41.800599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.658 ms 00:30:17.427 [2024-12-15 02:25:41.800608] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.801741] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.801764] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:17.427 [2024-12-15 02:25:41.801775] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.100 ms 00:30:17.427 [2024-12-15 02:25:41.801790] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.813074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.813123] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:17.427 [2024-12-15 02:25:41.813135] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 11.247 ms 00:30:17.427 [2024-12-15 02:25:41.813143] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.820838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.820885] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:17.427 [2024-12-15 02:25:41.820897] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.648 ms 00:30:17.427 [2024-12-15 02:25:41.820905] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.821021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.821040] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:17.427 [2024-12-15 02:25:41.821050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.070 ms 00:30:17.427 [2024-12-15 02:25:41.821058] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.831649] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.831691] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:17.427 [2024-12-15 02:25:41.831702] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.573 ms 00:30:17.427 [2024-12-15 02:25:41.831710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.427 [2024-12-15 02:25:41.842084] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.427 [2024-12-15 02:25:41.842128] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:17.427 [2024-12-15 02:25:41.842138] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.330 ms 00:30:17.428 [2024-12-15 02:25:41.842145] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.852466] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.428 [2024-12-15 02:25:41.852511] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:17.428 [2024-12-15 02:25:41.852521] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 10.275 ms 00:30:17.428 [2024-12-15 02:25:41.852527] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.862230] 
mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.428 [2024-12-15 02:25:41.862273] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:17.428 [2024-12-15 02:25:41.862283] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.609 ms 00:30:17.428 [2024-12-15 02:25:41.862290] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.862332] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:17.428 [2024-12-15 02:25:41.862358] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:17.428 [2024-12-15 02:25:41.862368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:17.428 [2024-12-15 02:25:41.862377] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:17.428 [2024-12-15 02:25:41.862386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862419] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862426] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862458] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862465] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862481] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:17.428 [2024-12-15 02:25:41.862505] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:17.428 [2024-12-15 02:25:41.862512] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f0c11f49-e1c8-4c0f-9224-6a2fd4d8c2d2 00:30:17.428 [2024-12-15 02:25:41.862520] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:17.428 [2024-12-15 02:25:41.862527] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: 
[FTL][ftl] total writes: 786752 00:30:17.428 [2024-12-15 02:25:41.862534] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 524288 00:30:17.428 [2024-12-15 02:25:41.862543] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: 1.5006 00:30:17.428 [2024-12-15 02:25:41.862553] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:17.428 [2024-12-15 02:25:41.862561] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:17.428 [2024-12-15 02:25:41.862574] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:17.428 [2024-12-15 02:25:41.862580] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:17.428 [2024-12-15 02:25:41.862588] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:17.428 [2024-12-15 02:25:41.862598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.428 [2024-12-15 02:25:41.862607] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:17.428 [2024-12-15 02:25:41.862615] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:30:17.428 [2024-12-15 02:25:41.862623] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.876102] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.428 [2024-12-15 02:25:41.876148] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:17.428 [2024-12-15 02:25:41.876166] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 13.460 ms 00:30:17.428 [2024-12-15 02:25:41.876175] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.876605] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:17.428 [2024-12-15 02:25:41.876625] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:17.428 [2024-12-15 02:25:41.876636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.366 ms 00:30:17.428 [2024-12-15 02:25:41.876644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.923238] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:41.923293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:17.428 [2024-12-15 02:25:41.923306] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:41.923314] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.923352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:41.923361] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:17.428 [2024-12-15 02:25:41.923370] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:41.923378] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.923473] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:41.923486] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:17.428 [2024-12-15 02:25:41.923498] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:41.923507] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:41.923525] mngt/ftl_mngt.c: 
427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:41.923534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:17.428 [2024-12-15 02:25:41.923542] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:41.923550] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.007169] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.007255] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:17.428 [2024-12-15 02:25:42.007274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.007284] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.075994] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076050] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:17.428 [2024-12-15 02:25:42.076062] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076071] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.076148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076159] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:17.428 [2024-12-15 02:25:42.076168] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076178] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.076274] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076286] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:17.428 [2024-12-15 02:25:42.076295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076303] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.076403] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076415] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:17.428 [2024-12-15 02:25:42.076424] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076432] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.076468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076478] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:17.428 [2024-12-15 02:25:42.076487] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076496] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.428 [2024-12-15 02:25:42.076540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.428 [2024-12-15 02:25:42.076549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:17.428 [2024-12-15 02:25:42.076557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.428 [2024-12-15 02:25:42.076565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.429 
[2024-12-15 02:25:42.076616] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:17.429 [2024-12-15 02:25:42.076628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:17.429 [2024-12-15 02:25:42.076636] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:17.429 [2024-12-15 02:25:42.076644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:17.429 [2024-12-15 02:25:42.076785] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 8129.222 ms, result 0 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@75 -- # tcp_target_setup 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:24.007 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85271 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85271 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85271 ']' 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:24.007 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:24.008 02:25:47 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:24.008 [2024-12-15 02:25:47.900906] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
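The trace above is the heart of the upgrade test: with prep_upgrade_on_shutdown set to true, killing the first target (pid 84729) drives the full 'FTL shutdown' management chain (persist L2P, NV cache, band/trim metadata and superblock, then rollback of every initializer, ~8.1 s total), after which a fresh spdk_tgt (pid 85271) is relaunched from the saved tgt.json so the same FTL device can be reloaded. A minimal sketch of that round trip, assuming the helper names from test/ftl/common.sh and test/common/autotest_common.sh seen in the xtrace ($testdir is hypothetical shorthand for the test directory):

    # Sketch only -- the harness drives this via tcp_target_shutdown/tcp_target_setup.
    rpc.py bdev_ftl_set_property -b ftl -p prep_upgrade_on_shutdown -v true
    killprocess "$spdk_tgt_pid"                     # triggers the 'FTL shutdown' actions logged above
    spdk_tgt '--cpumask=[0]' --config="$testdir/config/tgt.json" &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"                   # target is back up; the FTL reload below begins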
00:30:24.008 [2024-12-15 02:25:47.901020] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85271 ] 00:30:24.008 [2024-12-15 02:25:48.054287] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:24.008 [2024-12-15 02:25:48.176875] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:24.268 [2024-12-15 02:25:48.923130] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:24.268 [2024-12-15 02:25:48.923238] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:24.529 [2024-12-15 02:25:49.076610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.076834] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:24.529 [2024-12-15 02:25:49.076860] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:24.529 [2024-12-15 02:25:49.076869] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.076951] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.076962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:24.529 [2024-12-15 02:25:49.076971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.052 ms 00:30:24.529 [2024-12-15 02:25:49.076979] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.077009] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:24.529 [2024-12-15 02:25:49.078224] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:24.529 [2024-12-15 02:25:49.078280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.078290] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:24.529 [2024-12-15 02:25:49.078301] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.281 ms 00:30:24.529 [2024-12-15 02:25:49.078309] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.080051] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:24.529 [2024-12-15 02:25:49.094249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.094296] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:24.529 [2024-12-15 02:25:49.094317] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.200 ms 00:30:24.529 [2024-12-15 02:25:49.094326] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.094404] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.094416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:24.529 [2024-12-15 02:25:49.094426] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.028 ms 00:30:24.529 [2024-12-15 02:25:49.094433] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.102618] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 
02:25:49.102661] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:24.529 [2024-12-15 02:25:49.102672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 8.099 ms 00:30:24.529 [2024-12-15 02:25:49.102681] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.102750] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.102761] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:24.529 [2024-12-15 02:25:49.102770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.045 ms 00:30:24.529 [2024-12-15 02:25:49.102777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.102830] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.102844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:24.529 [2024-12-15 02:25:49.102853] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:24.529 [2024-12-15 02:25:49.102862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.102889] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:24.529 [2024-12-15 02:25:49.106938] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.106977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:24.529 [2024-12-15 02:25:49.106988] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.056 ms 00:30:24.529 [2024-12-15 02:25:49.107000] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.107032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.107041] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:24.529 [2024-12-15 02:25:49.107050] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:24.529 [2024-12-15 02:25:49.107057] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.107106] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:24.529 [2024-12-15 02:25:49.107131] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:24.529 [2024-12-15 02:25:49.107169] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:24.529 [2024-12-15 02:25:49.107185] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:24.529 [2024-12-15 02:25:49.107327] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:24.529 [2024-12-15 02:25:49.107340] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:24.529 [2024-12-15 02:25:49.107352] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:24.529 [2024-12-15 02:25:49.107362] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:24.529 [2024-12-15 02:25:49.107372] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device 
capacity: 5120.00 MiB 00:30:24.529 [2024-12-15 02:25:49.107384] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:24.529 [2024-12-15 02:25:49.107391] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:24.529 [2024-12-15 02:25:49.107399] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:24.529 [2024-12-15 02:25:49.107407] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:24.529 [2024-12-15 02:25:49.107416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.107424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:24.529 [2024-12-15 02:25:49.107432] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.313 ms 00:30:24.529 [2024-12-15 02:25:49.107440] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.529 [2024-12-15 02:25:49.107526] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.529 [2024-12-15 02:25:49.107534] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:24.529 [2024-12-15 02:25:49.107546] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:24.530 [2024-12-15 02:25:49.107553] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.530 [2024-12-15 02:25:49.107655] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:24.530 [2024-12-15 02:25:49.107665] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:24.530 [2024-12-15 02:25:49.107674] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107692] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:24.530 [2024-12-15 02:25:49.107699] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107706] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:24.530 [2024-12-15 02:25:49.107714] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:24.530 [2024-12-15 02:25:49.107721] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:24.530 [2024-12-15 02:25:49.107727] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107733] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:24.530 [2024-12-15 02:25:49.107740] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:24.530 [2024-12-15 02:25:49.107747] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107754] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:24.530 [2024-12-15 02:25:49.107761] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 00:30:24.530 [2024-12-15 02:25:49.107769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107777] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:24.530 [2024-12-15 02:25:49.107783] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:24.530 [2024-12-15 02:25:49.107790] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107796] 
ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:24.530 [2024-12-15 02:25:49.107803] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:24.530 [2024-12-15 02:25:49.107810] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:24.530 [2024-12-15 02:25:49.107834] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:24.530 [2024-12-15 02:25:49.107841] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107848] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:24.530 [2024-12-15 02:25:49.107856] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:24.530 [2024-12-15 02:25:49.107862] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107869] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:24.530 [2024-12-15 02:25:49.107876] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:24.530 [2024-12-15 02:25:49.107883] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107891] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:24.530 [2024-12-15 02:25:49.107897] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:24.530 [2024-12-15 02:25:49.107904] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107911] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:24.530 [2024-12-15 02:25:49.107918] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107925] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107932] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:24.530 [2024-12-15 02:25:49.107939] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107945] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107952] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:24.530 [2024-12-15 02:25:49.107959] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:24.530 [2024-12-15 02:25:49.107966] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.107972] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:24.530 [2024-12-15 02:25:49.107979] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:24.530 [2024-12-15 02:25:49.107987] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:24.530 [2024-12-15 02:25:49.107994] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:24.530 [2024-12-15 02:25:49.108005] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:24.530 [2024-12-15 02:25:49.108012] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:24.530 [2024-12-15 02:25:49.108018] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:24.530 [2024-12-15 02:25:49.108025] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:24.530 [2024-12-15 02:25:49.108032] 
ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:24.530 [2024-12-15 02:25:49.108039] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:24.530 [2024-12-15 02:25:49.108048] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:24.530 [2024-12-15 02:25:49.108059] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108068] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:24.530 [2024-12-15 02:25:49.108077] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108084] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108093] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:24.530 [2024-12-15 02:25:49.108100] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:24.530 [2024-12-15 02:25:49.108108] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:24.530 [2024-12-15 02:25:49.108115] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:24.530 [2024-12-15 02:25:49.108123] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108130] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108137] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108145] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108152] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108160] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108168] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:24.530 [2024-12-15 02:25:49.108175] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - base dev: 00:30:24.530 [2024-12-15 02:25:49.108183] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108205] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:24.530 [2024-12-15 02:25:49.108214] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region 
type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:24.530 [2024-12-15 02:25:49.108220] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:24.530 [2024-12-15 02:25:49.108228] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:24.530 [2024-12-15 02:25:49.108235] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:24.530 [2024-12-15 02:25:49.108242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:24.530 [2024-12-15 02:25:49.108250] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.650 ms 00:30:24.530 [2024-12-15 02:25:49.108257] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:24.530 [2024-12-15 02:25:49.108300] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] NV cache data region needs scrubbing, this may take a while. 00:30:24.530 [2024-12-15 02:25:49.108311] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl] Scrubbing 5 chunks 00:30:28.796 [2024-12-15 02:25:53.262988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.796 [2024-12-15 02:25:53.263065] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Scrub NV cache 00:30:28.796 [2024-12-15 02:25:53.263084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4154.673 ms 00:30:28.796 [2024-12-15 02:25:53.263094] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.796 [2024-12-15 02:25:53.294597] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.796 [2024-12-15 02:25:53.294831] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:28.796 [2024-12-15 02:25:53.294854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 31.217 ms 00:30:28.796 [2024-12-15 02:25:53.294863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.796 [2024-12-15 02:25:53.294955] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.796 [2024-12-15 02:25:53.294975] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:28.796 [2024-12-15 02:25:53.294986] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:28.796 [2024-12-15 02:25:53.294995] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.796 [2024-12-15 02:25:53.330855] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.796 [2024-12-15 02:25:53.330904] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:28.796 [2024-12-15 02:25:53.330916] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 35.801 ms 00:30:28.796 [2024-12-15 02:25:53.330929] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.796 [2024-12-15 02:25:53.330973] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.330984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:28.797 [2024-12-15 02:25:53.330994] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:28.797 [2024-12-15 02:25:53.331002] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.331589] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.331614] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:28.797 [2024-12-15 02:25:53.331626] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.528 ms 00:30:28.797 [2024-12-15 02:25:53.331636] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.331693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.331705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:28.797 [2024-12-15 02:25:53.331715] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.021 ms 00:30:28.797 [2024-12-15 02:25:53.331724] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.349316] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.349515] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:28.797 [2024-12-15 02:25:53.349537] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.566 ms 00:30:28.797 [2024-12-15 02:25:53.349546] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.382249] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 0, empty chunks = 4 00:30:28.797 [2024-12-15 02:25:53.382471] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:28.797 [2024-12-15 02:25:53.382495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.382506] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore NV cache metadata 00:30:28.797 [2024-12-15 02:25:53.382517] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 32.829 ms 00:30:28.797 [2024-12-15 02:25:53.382525] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.397063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.397114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid map metadata 00:30:28.797 [2024-12-15 02:25:53.397127] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 14.489 ms 00:30:28.797 [2024-12-15 02:25:53.397136] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.409710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.409753] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore band info metadata 00:30:28.797 [2024-12-15 02:25:53.409765] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.515 ms 00:30:28.797 [2024-12-15 02:25:53.409774] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.422310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.422367] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore trim metadata 00:30:28.797 [2024-12-15 02:25:53.422379] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.485 ms 00:30:28.797 [2024-12-15 02:25:53.422387] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.423044] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.423072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:28.797 [2024-12-15 
02:25:53.423084] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.541 ms 00:30:28.797 [2024-12-15 02:25:53.423093] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.485817] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.485877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:28.797 [2024-12-15 02:25:53.485892] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 62.703 ms 00:30:28.797 [2024-12-15 02:25:53.485901] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.497268] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:28.797 [2024-12-15 02:25:53.498257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.498295] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:28.797 [2024-12-15 02:25:53.498308] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 12.291 ms 00:30:28.797 [2024-12-15 02:25:53.498317] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.498411] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.498427] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P 00:30:28.797 [2024-12-15 02:25:53.498437] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.018 ms 00:30:28.797 [2024-12-15 02:25:53.498445] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.498523] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.498536] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:28.797 [2024-12-15 02:25:53.498545] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.020 ms 00:30:28.797 [2024-12-15 02:25:53.498554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.498580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.498590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:28.797 [2024-12-15 02:25:53.498602] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:28.797 [2024-12-15 02:25:53.498610] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.498644] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:28.797 [2024-12-15 02:25:53.498656] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.498667] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:28.797 [2024-12-15 02:25:53.498676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.014 ms 00:30:28.797 [2024-12-15 02:25:53.498684] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.523543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.523596] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL dirty state 00:30:28.797 [2024-12-15 02:25:53.523609] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.836 ms 00:30:28.797 [2024-12-15 02:25:53.523618] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.523709] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:28.797 [2024-12-15 02:25:53.523720] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:28.797 [2024-12-15 02:25:53.523729] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.037 ms 00:30:28.797 [2024-12-15 02:25:53.523738] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:28.797 [2024-12-15 02:25:53.525089] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 4447.951 ms, result 0 00:30:28.797 [2024-12-15 02:25:53.539959] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:28.797 [2024-12-15 02:25:53.555962] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:29.058 [2024-12-15 02:25:53.564127] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:29.320 02:25:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:29.320 02:25:53 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:29.320 02:25:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:29.320 02:25:53 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:29.320 02:25:53 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@78 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_set_property -b ftl -p verbose_mode -v true 00:30:29.582 [2024-12-15 02:25:54.104491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.582 [2024-12-15 02:25:54.104540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decode property 00:30:29.582 [2024-12-15 02:25:54.104558] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:29.582 [2024-12-15 02:25:54.104566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.582 [2024-12-15 02:25:54.104591] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.582 [2024-12-15 02:25:54.104601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set property 00:30:29.582 [2024-12-15 02:25:54.104610] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:29.582 [2024-12-15 02:25:54.104618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.582 [2024-12-15 02:25:54.104639] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:29.582 [2024-12-15 02:25:54.104648] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Property setting cleanup 00:30:29.582 [2024-12-15 02:25:54.104657] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:29.582 [2024-12-15 02:25:54.104666] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:29.582 [2024-12-15 02:25:54.104744] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Set FTL property', duration = 0.228 ms, result 0 00:30:29.582 true 00:30:29.582 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@79 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:29.582 { 00:30:29.582 "name": "ftl", 00:30:29.582 "properties": [ 00:30:29.582 { 00:30:29.582 "name": "superblock_version", 00:30:29.582 "value": 5, 00:30:29.582 "read-only": true 00:30:29.582 }, 
00:30:29.582 { 00:30:29.582 "name": "base_device", 00:30:29.582 "bands": [ 00:30:29.583 { 00:30:29.583 "id": 0, 00:30:29.583 "state": "CLOSED", 00:30:29.583 "validity": 1.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 1, 00:30:29.583 "state": "CLOSED", 00:30:29.583 "validity": 1.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 2, 00:30:29.583 "state": "CLOSED", 00:30:29.583 "validity": 0.007843137254901933 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 3, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 4, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 5, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 6, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 7, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 8, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 9, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 10, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 11, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 12, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 13, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 14, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 15, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 16, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 17, 00:30:29.583 "state": "FREE", 00:30:29.583 "validity": 0.0 00:30:29.583 } 00:30:29.583 ], 00:30:29.583 "read-only": true 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "name": "cache_device", 00:30:29.583 "type": "bdev", 00:30:29.583 "chunks": [ 00:30:29.583 { 00:30:29.583 "id": 0, 00:30:29.583 "state": "INACTIVE", 00:30:29.583 "utilization": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 1, 00:30:29.583 "state": "OPEN", 00:30:29.583 "utilization": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 2, 00:30:29.583 "state": "OPEN", 00:30:29.583 "utilization": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 3, 00:30:29.583 "state": "FREE", 00:30:29.583 "utilization": 0.0 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "id": 4, 00:30:29.583 "state": "FREE", 00:30:29.583 "utilization": 0.0 00:30:29.583 } 00:30:29.583 ], 00:30:29.583 "read-only": true 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "name": "verbose_mode", 00:30:29.583 "value": true, 00:30:29.583 "unit": "", 00:30:29.583 "desc": "In verbose mode, user is able to get access to additional advanced FTL properties" 00:30:29.583 }, 00:30:29.583 { 00:30:29.583 "name": "prep_upgrade_on_shutdown", 00:30:29.583 "value": false, 00:30:29.583 "unit": "", 00:30:29.583 "desc": "During shutdown, FTL executes all actions which are needed for upgrade to a new version" 00:30:29.583 } 00:30:29.583 ] 00:30:29.583 } 00:30:29.583 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # jq '[.properties[] | select(.name == 
"cache_device") | .chunks[] | select(.utilization != 0.0)] | length' 00:30:29.583 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # ftl_get_properties 00:30:29.583 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:29.845 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@82 -- # used=0 00:30:29.845 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@83 -- # [[ 0 -ne 0 ]] 00:30:29.845 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # ftl_get_properties 00:30:29.845 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # jq '[.properties[] | select(.name == "bands") | .bands[] | select(.state == "OPENED")] | length' 00:30:29.845 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@59 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_get_properties -b ftl 00:30:30.107 Validate MD5 checksum, iteration 1 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@89 -- # opened=0 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@90 -- # [[ 0 -ne 0 ]] 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@111 -- # test_validate_checksum 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:30.107 02:25:54 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:30.107 [2024-12-15 02:25:54.846064] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:30:30.107 [2024-12-15 02:25:54.846903] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85365 ] 00:30:30.369 [2024-12-15 02:25:55.018914] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:30.630 [2024-12-15 02:25:55.142760] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:32.018  [2024-12-15T02:25:57.727Z] Copying: 562/1024 [MB] (562 MBps) [2024-12-15T02:25:58.670Z] Copying: 1024/1024 [MB] (average 593 MBps) 00:30:33.905 00:30:33.905 02:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:33.905 02:25:58 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f753765335ac2ad3ff91c62765a66e84 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f753765335ac2ad3ff91c62765a66e84 != \f\7\5\3\7\6\5\3\3\5\a\c\2\a\d\3\f\f\9\1\c\6\2\7\6\5\a\6\6\e\8\4 ]] 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:36.449 Validate MD5 checksum, iteration 2 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:36.449 02:26:00 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:36.449 [2024-12-15 02:26:00.721345] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
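
Each test_validate_checksum iteration above drives tcp_dd, the harness wrapper from ftl/common.sh that forwards to spdk_dd using the NVMe/TCP initiator config in ini.json, to pull 1024 blocks of 1 MiB from ftln1 into a local file, then fingerprints that file with md5sum. A sketch of one iteration, only runnable inside the harness (tcp_dd is a shell function there); $expected_sum is a stand-in for the recorded digest, f753765335ac2ad3ff91c62765a66e84 for iteration 1 in this run:

    tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0
    sum=$(md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file | cut -f1 '-d ')
    [[ $sum == "$expected_sum" ]] || exit 1   # mismatch would fail the test
    skip=$((skip + 1024))                     # iteration 2 reads the next 1 GiB window
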
00:30:36.449 [2024-12-15 02:26:00.721589] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85432 ] 00:30:36.449 [2024-12-15 02:26:00.881292] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:36.449 [2024-12-15 02:26:00.986971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:37.837  [2024-12-15T02:26:03.543Z] Copying: 536/1024 [MB] (536 MBps) [2024-12-15T02:26:04.486Z] Copying: 1024/1024 [MB] (average 562 MBps) 00:30:39.721 00:30:39.721 02:26:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:39.721 02:26:04 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=32689d92d1969d5b321580cf4eab64db 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 32689d92d1969d5b321580cf4eab64db != \3\2\6\8\9\d\9\2\d\1\9\6\9\d\5\b\3\2\1\5\8\0\c\f\4\e\a\b\6\4\d\b ]] 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@114 -- # tcp_target_shutdown_dirty 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@137 -- # [[ -n 85271 ]] 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@138 -- # kill -9 85271 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@139 -- # unset spdk_tgt_pid 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@115 -- # tcp_target_setup 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@81 -- # local base_bdev= 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@82 -- # local cache_bdev= 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@84 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@89 -- # spdk_tgt_pid=85488 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@85 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@90 -- # export spdk_tgt_pid 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- ftl/common.sh@91 -- # waitforlisten 85488 00:30:41.627 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@835 -- # '[' -z 85488 ']' 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 
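
With both 1 GiB windows verified, tcp_target_shutdown_dirty SIGKILLs the target (pid 85271) so FTL never persists a clean shutdown state, and tcp_target_setup relaunches spdk_tgt from the saved tgt.json; the waitforlisten lines that follow poll the new process (pid 85488) until its RPC socket answers. A condensed sketch of that restart path, assuming the helper names from ftl/common.sh and common/autotest_common.sh seen in this log:

    # Dirty shutdown: no RPC-driven teardown, just SIGKILL mid-flight.
    kill -9 "$spdk_tgt_pid"                                   # 85271 in this run
    /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt '--cpumask=[0]' \
        --config=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json &
    spdk_tgt_pid=$!
    waitforlisten "$spdk_tgt_pid"   # blocks until /var/tmp/spdk.sock is up

The long FTL trace that follows the restart is the payoff: on load, FTL detects the dirty state and walks the recovery path (restore P2L checkpoints, recover band and chunk state, replay the two open chunks at offsets 262144 and 524288) before the target starts listening again.
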
00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:41.627 02:26:06 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:41.627 [2024-12-15 02:26:06.311257] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:41.627 [2024-12-15 02:26:06.311375] [ DPDK EAL parameters: spdk_tgt --no-shconf -l 0 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85488 ] 00:30:41.889 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 834: 85271 Killed $spdk_tgt_bin "--cpumask=$spdk_tgt_cpumask" --config="$spdk_tgt_cnfg" 00:30:41.889 [2024-12-15 02:26:06.472503] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:41.889 [2024-12-15 02:26:06.595257] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:30:42.833 [2024-12-15 02:26:07.440949] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.833 [2024-12-15 02:26:07.441050] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: cachen1 00:30:42.833 [2024-12-15 02:26:07.595652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.595919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Check configuration 00:30:43.097 [2024-12-15 02:26:07.595949] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.006 ms 00:30:43.097 [2024-12-15 02:26:07.595958] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.596051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.596063] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:43.097 [2024-12-15 02:26:07.596073] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.062 ms 00:30:43.097 [2024-12-15 02:26:07.596082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.596114] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using cachen1p0 as write buffer cache 00:30:43.097 [2024-12-15 02:26:07.596870] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl] Using bdev as NV Cache device 00:30:43.097 [2024-12-15 02:26:07.596893] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.596902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:43.097 [2024-12-15 02:26:07.596912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.791 ms 00:30:43.097 [2024-12-15 02:26:07.596921] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.597302] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl] SHM: clean 0, shm_clean 0 00:30:43.097 [2024-12-15 02:26:07.618314] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.618526] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Load super block 00:30:43.097 [2024-12-15 02:26:07.618550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 21.012 ms 00:30:43.097 [2024-12-15 02:26:07.618559] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.628865] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] 
Action 00:30:43.097 [2024-12-15 02:26:07.628918] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Validate super block 00:30:43.097 [2024-12-15 02:26:07.628931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.053 ms 00:30:43.097 [2024-12-15 02:26:07.628940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.629366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.629380] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:43.097 [2024-12-15 02:26:07.629390] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.329 ms 00:30:43.097 [2024-12-15 02:26:07.629399] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.629465] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.629476] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:43.097 [2024-12-15 02:26:07.629486] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.046 ms 00:30:43.097 [2024-12-15 02:26:07.629495] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.629522] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.629531] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Register IO device 00:30:43.097 [2024-12-15 02:26:07.629540] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:43.097 [2024-12-15 02:26:07.629549] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.629573] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on app_thread 00:30:43.097 [2024-12-15 02:26:07.633056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.633276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:43.097 [2024-12-15 02:26:07.633298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 3.489 ms 00:30:43.097 [2024-12-15 02:26:07.633310] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.633366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.633377] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Decorate bands 00:30:43.097 [2024-12-15 02:26:07.633387] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.007 ms 00:30:43.097 [2024-12-15 02:26:07.633395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.097 [2024-12-15 02:26:07.633437] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl] FTL layout setup mode 0 00:30:43.097 [2024-12-15 02:26:07.633464] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob load 0x150 bytes 00:30:43.097 [2024-12-15 02:26:07.633506] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] base layout blob load 0x48 bytes 00:30:43.097 [2024-12-15 02:26:07.633527] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl] layout blob load 0x190 bytes 00:30:43.097 [2024-12-15 02:26:07.633640] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] nvc layout blob store 0x150 bytes 00:30:43.097 [2024-12-15 02:26:07.633651] upgrade/ftl_sb_v5.c: 
101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] base layout blob store 0x48 bytes 00:30:43.097 [2024-12-15 02:26:07.633663] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl] layout blob store 0x190 bytes 00:30:43.097 [2024-12-15 02:26:07.633674] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl] Base device capacity: 20480.00 MiB 00:30:43.097 [2024-12-15 02:26:07.633684] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache device capacity: 5120.00 MiB 00:30:43.097 [2024-12-15 02:26:07.633693] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P entries: 3774873 00:30:43.097 [2024-12-15 02:26:07.633702] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl] L2P address size: 4 00:30:43.097 [2024-12-15 02:26:07.633710] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl] P2L checkpoint pages: 2048 00:30:43.097 [2024-12-15 02:26:07.633717] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl] NV cache chunk count 5 00:30:43.097 [2024-12-15 02:26:07.633725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.097 [2024-12-15 02:26:07.633737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize layout 00:30:43.097 [2024-12-15 02:26:07.633746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.293 ms 00:30:43.098 [2024-12-15 02:26:07.633753] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.098 [2024-12-15 02:26:07.633839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.098 [2024-12-15 02:26:07.633846] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Verify layout 00:30:43.098 [2024-12-15 02:26:07.633854] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.069 ms 00:30:43.098 [2024-12-15 02:26:07.633862] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.098 [2024-12-15 02:26:07.633979] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl] NV cache layout: 00:30:43.098 [2024-12-15 02:26:07.633991] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb 00:30:43.098 [2024-12-15 02:26:07.634003] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634011] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634026] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region l2p 00:30:43.098 [2024-12-15 02:26:07.634034] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634041] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 14.50 MiB 00:30:43.098 [2024-12-15 02:26:07.634049] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md 00:30:43.098 [2024-12-15 02:26:07.634057] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.62 MiB 00:30:43.098 [2024-12-15 02:26:07.634065] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634073] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region band_md_mirror 00:30:43.098 [2024-12-15 02:26:07.634080] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.75 MiB 00:30:43.098 [2024-12-15 02:26:07.634087] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634094] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md 00:30:43.098 [2024-12-15 02:26:07.634101] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.38 MiB 
00:30:43.098 [2024-12-15 02:26:07.634108] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634115] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region nvc_md_mirror 00:30:43.098 [2024-12-15 02:26:07.634121] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.50 MiB 00:30:43.098 [2024-12-15 02:26:07.634128] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634134] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l0 00:30:43.098 [2024-12-15 02:26:07.634141] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 14.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634156] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634162] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l1 00:30:43.098 [2024-12-15 02:26:07.634169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 22.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634183] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l2 00:30:43.098 [2024-12-15 02:26:07.634190] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 30.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634212] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634219] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region p2l3 00:30:43.098 [2024-12-15 02:26:07.634225] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 38.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 8.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634238] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md 00:30:43.098 [2024-12-15 02:26:07.634245] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 46.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634252] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634258] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_md_mirror 00:30:43.098 [2024-12-15 02:26:07.634265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634276] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634284] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log 00:30:43.098 [2024-12-15 02:26:07.634291] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634297] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634305] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region trim_log_mirror 00:30:43.098 [2024-12-15 02:26:07.634312] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 47.25 MiB 00:30:43.098 [2024-12-15 02:26:07.634318] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634325] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl] Base device layout: 00:30:43.098 [2024-12-15 02:26:07.634334] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region sb_mirror 00:30:43.098 [2024-12-15 02:26:07.634342] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634349] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 
0.12 MiB 00:30:43.098 [2024-12-15 02:26:07.634357] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region vmap 00:30:43.098 [2024-12-15 02:26:07.634364] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 18432.25 MiB 00:30:43.098 [2024-12-15 02:26:07.634371] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 0.88 MiB 00:30:43.098 [2024-12-15 02:26:07.634378] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl] Region data_btm 00:30:43.098 [2024-12-15 02:26:07.634384] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl] offset: 0.25 MiB 00:30:43.098 [2024-12-15 02:26:07.634391] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl] blocks: 18432.00 MiB 00:30:43.098 [2024-12-15 02:26:07.634400] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata layout - nvc: 00:30:43.098 [2024-12-15 02:26:07.634410] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634420] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0xe80 00:30:43.098 [2024-12-15 02:26:07.634427] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x3 ver:2 blk_offs:0xea0 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634434] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x4 ver:2 blk_offs:0xec0 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634441] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xa ver:2 blk_offs:0xee0 blk_sz:0x800 00:30:43.098 [2024-12-15 02:26:07.634449] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xb ver:2 blk_offs:0x16e0 blk_sz:0x800 00:30:43.098 [2024-12-15 02:26:07.634456] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xc ver:2 blk_offs:0x1ee0 blk_sz:0x800 00:30:43.098 [2024-12-15 02:26:07.634463] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xd ver:2 blk_offs:0x26e0 blk_sz:0x800 00:30:43.098 [2024-12-15 02:26:07.634470] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xe ver:0 blk_offs:0x2ee0 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634476] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xf ver:0 blk_offs:0x2f00 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634483] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x10 ver:1 blk_offs:0x2f20 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634490] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x11 ver:1 blk_offs:0x2f40 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634498] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x6 ver:2 blk_offs:0x2f60 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634504] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x7 ver:2 blk_offs:0x2f80 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634518] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x2fa0 blk_sz:0x13d060 00:30:43.098 [2024-12-15 02:26:07.634526] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] SB metadata 
layout - base dev: 00:30:43.098 [2024-12-15 02:26:07.634535] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634546] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:30:43.098 [2024-12-15 02:26:07.634555] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x480000 00:30:43.099 [2024-12-15 02:26:07.634563] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0x5 ver:0 blk_offs:0x480040 blk_sz:0xe0 00:30:43.099 [2024-12-15 02:26:07.634570] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl] Region type:0xfffffffe ver:0 blk_offs:0x480120 blk_sz:0x7fee0 00:30:43.099 [2024-12-15 02:26:07.634579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.634587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Layout upgrade 00:30:43.099 [2024-12-15 02:26:07.634595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.683 ms 00:30:43.099 [2024-12-15 02:26:07.634603] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.668694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.668886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:43.099 [2024-12-15 02:26:07.668953] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 34.037 ms 00:30:43.099 [2024-12-15 02:26:07.668977] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.669041] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.669067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize band addresses 00:30:43.099 [2024-12-15 02:26:07.669088] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.017 ms 00:30:43.099 [2024-12-15 02:26:07.669109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.709837] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.710053] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:43.099 [2024-12-15 02:26:07.710129] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 40.644 ms 00:30:43.099 [2024-12-15 02:26:07.710155] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.710233] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.710301] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:43.099 [2024-12-15 02:26:07.710329] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:43.099 [2024-12-15 02:26:07.710362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.710557] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.710681] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:43.099 [2024-12-15 02:26:07.710736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.067 ms 00:30:43.099 [2024-12-15 02:26:07.710760] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.710887] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.710917] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:43.099 [2024-12-15 02:26:07.710970] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.033 ms 00:30:43.099 [2024-12-15 02:26:07.710994] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.728369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.728480] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:43.099 [2024-12-15 02:26:07.728528] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 17.325 ms 00:30:43.099 [2024-12-15 02:26:07.728554] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.728667] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.728696] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize recovery 00:30:43.099 [2024-12-15 02:26:07.728717] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:43.099 [2024-12-15 02:26:07.728735] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.753461] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.753601] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover band state 00:30:43.099 [2024-12-15 02:26:07.753661] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 24.697 ms 00:30:43.099 [2024-12-15 02:26:07.753685] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.763327] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.763433] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize P2L checkpointing 00:30:43.099 [2024-12-15 02:26:07.763496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.534 ms 00:30:43.099 [2024-12-15 02:26:07.763518] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.824021] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.824179] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore P2L checkpoints 00:30:43.099 [2024-12-15 02:26:07.824273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 60.433 ms 00:30:43.099 [2024-12-15 02:26:07.824299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.824461] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=0 found seq_id=8 00:30:43.099 [2024-12-15 02:26:07.824603] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=1 found seq_id=9 00:30:43.099 [2024-12-15 02:26:07.824854] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=2 found seq_id=12 00:30:43.099 [2024-12-15 02:26:07.824993] mngt/ftl_mngt_recovery.c: 596:p2l_ckpt_preprocess: *NOTICE*: [FTL][ftl] P2L ckpt_id=3 found seq_id=0 00:30:43.099 [2024-12-15 02:26:07.825061] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.825114] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Preprocess P2L checkpoints 00:30:43.099 [2024-12-15 
02:26:07.825139] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.710 ms 00:30:43.099 [2024-12-15 02:26:07.825159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.825266] mngt/ftl_mngt_recovery.c: 650:ftl_mngt_recovery_open_bands_p2l: *NOTICE*: [FTL][ftl] No more open bands to recover from P2L 00:30:43.099 [2024-12-15 02:26:07.825352] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.825381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open bands P2L 00:30:43.099 [2024-12-15 02:26:07.825402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.087 ms 00:30:43.099 [2024-12-15 02:26:07.825421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.841598] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.841731] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover chunk state 00:30:43.099 [2024-12-15 02:26:07.841783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 16.141 ms 00:30:43.099 [2024-12-15 02:26:07.841806] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.850543] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.850647] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover max seq ID 00:30:43.099 [2024-12-15 02:26:07.850694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.010 ms 00:30:43.099 [2024-12-15 02:26:07.850716] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:43.099 [2024-12-15 02:26:07.850823] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 262144, seq id 14 00:30:43.099 [2024-12-15 02:26:07.851031] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:43.099 [2024-12-15 02:26:07.851062] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:43.099 [2024-12-15 02:26:07.851083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.210 ms 00:30:43.099 [2024-12-15 02:26:07.851101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.042 [2024-12-15 02:26:08.482857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.042 [2024-12-15 02:26:08.482988] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:44.042 [2024-12-15 02:26:08.483039] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 630.797 ms 00:30:44.042 [2024-12-15 02:26:08.483059] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.042 [2024-12-15 02:26:08.486401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.042 [2024-12-15 02:26:08.486502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:44.042 [2024-12-15 02:26:08.486550] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.052 ms 00:30:44.042 [2024-12-15 02:26:08.486568] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.042 [2024-12-15 02:26:08.487132] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 262144, seq id 14 00:30:44.042 [2024-12-15 02:26:08.487208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.042 [2024-12-15 02:26:08.487278] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:44.042 [2024-12-15 02:26:08.487298] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.563 ms 00:30:44.042 [2024-12-15 02:26:08.487313] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.042 [2024-12-15 02:26:08.487350] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.042 [2024-12-15 02:26:08.487368] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:44.042 [2024-12-15 02:26:08.487384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.004 ms 00:30:44.042 [2024-12-15 02:26:08.487406] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.042 [2024-12-15 02:26:08.487443] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 636.624 ms, result 0 00:30:44.042 [2024-12-15 02:26:08.487540] ftl_nv_cache.c:2274:recover_open_chunk_prepare: *NOTICE*: [FTL][ftl] Start recovery open chunk, offset = 524288, seq id 15 00:30:44.042 [2024-12-15 02:26:08.487727] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.042 [2024-12-15 02:26:08.487778] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, prepare 00:30:44.042 [2024-12-15 02:26:08.487794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.188 ms 00:30:44.042 [2024-12-15 02:26:08.487849] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 02:26:09.052849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.053015] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, read vss 00:30:44.305 [2024-12-15 02:26:09.053085] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 564.266 ms 00:30:44.305 [2024-12-15 02:26:09.053106] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 02:26:09.056640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.056733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, persist P2L map 00:30:44.305 [2024-12-15 02:26:09.056777] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.188 ms 00:30:44.305 [2024-12-15 02:26:09.056794] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 02:26:09.057886] ftl_nv_cache.c:2323:recover_open_chunk_close_chunk_cb: *NOTICE*: [FTL][ftl] Recovered chunk, offset = 524288, seq id 15 00:30:44.305 [2024-12-15 02:26:09.058000] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.058080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, close chunk 00:30:44.305 [2024-12-15 02:26:09.058098] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.174 ms 00:30:44.305 [2024-12-15 02:26:09.058113] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 02:26:09.058148] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.058167] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Chunk recovery, cleanup 00:30:44.305 [2024-12-15 02:26:09.058182] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.305 [2024-12-15 02:26:09.058208] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 
02:26:09.058248] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'Recover open chunk', duration = 570.705 ms, result 0 00:30:44.305 [2024-12-15 02:26:09.058336] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: full chunks = 2, empty chunks = 2 00:30:44.305 [2024-12-15 02:26:09.058365] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl] FTL NV Cache: state loaded successfully 00:30:44.305 [2024-12-15 02:26:09.058391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.058406] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Recover open chunks P2L 00:30:44.305 [2024-12-15 02:26:09.058421] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1207.591 ms 00:30:44.305 [2024-12-15 02:26:09.058435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.305 [2024-12-15 02:26:09.058468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.305 [2024-12-15 02:26:09.058490] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize recovery 00:30:44.305 [2024-12-15 02:26:09.058505] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.001 ms 00:30:44.305 [2024-12-15 02:26:09.058520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.067984] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 1 (of 2) MiB 00:30:44.566 [2024-12-15 02:26:09.068073] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.068081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize L2P 00:30:44.566 [2024-12-15 02:26:09.068089] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.531 ms 00:30:44.566 [2024-12-15 02:26:09.068095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.068651] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.068666] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore L2P from shared memory 00:30:44.566 [2024-12-15 02:26:09.068676] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.499 ms 00:30:44.566 [2024-12-15 02:26:09.068682] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070373] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070391] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Restore valid maps counters 00:30:44.566 [2024-12-15 02:26:09.070399] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.678 ms 00:30:44.566 [2024-12-15 02:26:09.070405] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070436] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070443] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Complete trim transaction 00:30:44.566 [2024-12-15 02:26:09.070450] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.002 ms 00:30:44.566 [2024-12-15 02:26:09.070459] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070540] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize band initialization 00:30:44.566 
[2024-12-15 02:26:09.070556] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.016 ms 00:30:44.566 [2024-12-15 02:26:09.070561] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070580] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070586] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Start core poller 00:30:44.566 [2024-12-15 02:26:09.070593] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.005 ms 00:30:44.566 [2024-12-15 02:26:09.070599] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070628] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl] Self test skipped 00:30:44.566 [2024-12-15 02:26:09.070636] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070643] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Self test on startup 00:30:44.566 [2024-12-15 02:26:09.070649] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.009 ms 00:30:44.566 [2024-12-15 02:26:09.070656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.070697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:44.566 [2024-12-15 02:26:09.070705] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finalize initialization 00:30:44.566 [2024-12-15 02:26:09.070711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.027 ms 00:30:44.566 [2024-12-15 02:26:09.070717] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:44.566 [2024-12-15 02:26:09.071714] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL startup', duration = 1475.692 ms, result 0 00:30:44.566 [2024-12-15 02:26:09.084343] tcp.c: 756:nvmf_tcp_create: *NOTICE*: *** TCP Transport Init *** 00:30:44.566 [2024-12-15 02:26:09.100339] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl] FTL IO channel created on nvmf_tgt_poll_group_000 00:30:44.566 [2024-12-15 02:26:09.108478] tcp.c:1099:nvmf_tcp_listen: *NOTICE*: *** NVMe/TCP Target Listening on 127.0.0.1 port 4420 *** 00:30:44.566 Validate MD5 checksum, iteration 1 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@868 -- # return 0 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@93 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json ]] 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@95 -- # return 0 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@116 -- # test_validate_checksum 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@96 -- # skip=0 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i = 0 )) 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 1' 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:44.566 02:26:09 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:44.566 02:26:09 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=0 00:30:44.566 [2024-12-15 02:26:09.198049] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:30:44.566 [2024-12-15 02:26:09.198250] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85528 ] 00:30:44.827 [2024-12-15 02:26:09.349841] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:44.827 [2024-12-15 02:26:09.460228] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:46.745  [2024-12-15T02:26:12.081Z] Copying: 628/1024 [MB] (628 MBps) [2024-12-15T02:26:14.622Z] Copying: 1024/1024 [MB] (average 573 MBps) 00:30:49.857 00:30:49.857 02:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=1024 00:30:49.857 02:26:14 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:51.758 Validate MD5 checksum, iteration 2 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=f753765335ac2ad3ff91c62765a66e84 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ f753765335ac2ad3ff91c62765a66e84 != \f\7\5\3\7\6\5\3\3\5\a\c\2\a\d\3\f\f\9\1\c\6\2\7\6\5\a\6\6\e\8\4 ]] 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@98 -- # echo 'Validate MD5 checksum, iteration 2' 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@99 -- # tcp_dd --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@198 -- # tcp_initiator_setup 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@151 -- # local 'rpc=/home/vagrant/spdk_repo/spdk/scripts/rpc.py -s /var/tmp/spdk.tgt.sock' 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@153 -- # [[ -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json ]] 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@154 -- # return 0 00:30:51.758 02:26:16 ftl.ftl_upgrade_shutdown -- ftl/common.sh@199 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd '--cpumask=[1]' --rpc-socket=/var/tmp/spdk.tgt.sock --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json --ib=ftln1 --of=/home/vagrant/spdk_repo/spdk/test/ftl/file --bs=1048576 --count=1024 --qd=2 --skip=1024 00:30:51.759 [2024-12-15 02:26:16.280603] Starting SPDK v25.01-pre git sha1 
e01cb43b8 / DPDK 24.03.0 initialization... 00:30:51.759 [2024-12-15 02:26:16.280713] [ DPDK EAL parameters: spdk_dd --no-shconf -l 1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85602 ] 00:30:51.759 [2024-12-15 02:26:16.436423] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:30:52.017 [2024-12-15 02:26:16.525813] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 1 00:30:53.445  [2024-12-15T02:26:18.783Z] Copying: 656/1024 [MB] (656 MBps) [2024-12-15T02:26:20.695Z] Copying: 1024/1024 [MB] (average 662 MBps) 00:30:55.930 00:30:55.930 02:26:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@100 -- # skip=2048 00:30:55.930 02:26:20 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@102 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # cut -f1 '-d ' 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@103 -- # sum=32689d92d1969d5b321580cf4eab64db 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@105 -- # [[ 32689d92d1969d5b321580cf4eab64db != \3\2\6\8\9\d\9\2\d\1\9\6\9\d\5\b\3\2\1\5\8\0\c\f\4\e\a\b\6\4\d\b ]] 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i++ )) 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@97 -- # (( i < iterations )) 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@118 -- # trap - SIGINT SIGTERM EXIT 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@119 -- # cleanup 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@11 -- # trap - SIGINT SIGTERM EXIT 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@12 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@13 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/file.md5 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@14 -- # tcp_cleanup 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@193 -- # tcp_target_cleanup 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@144 -- # tcp_target_shutdown 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@130 -- # [[ -n 85488 ]] 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- ftl/common.sh@131 -- # killprocess 85488 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@954 -- # '[' -z 85488 ']' 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@958 -- # kill -0 85488 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # uname 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85488 00:30:58.463 killing process with pid 85488 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85488' 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- 
common/autotest_common.sh@973 -- # kill 85488 00:30:58.463 02:26:22 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@978 -- # wait 85488 00:30:58.723 [2024-12-15 02:26:23.425853] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on nvmf_tgt_poll_group_000 00:30:58.723 [2024-12-15 02:26:23.437468] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.437501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinit core IO channel 00:30:58.723 [2024-12-15 02:26:23.437511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.003 ms 00:30:58.723 [2024-12-15 02:26:23.437517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.437534] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl] FTL IO channel destroy on app_thread 00:30:58.723 [2024-12-15 02:26:23.439583] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.439606] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Unregister IO device 00:30:58.723 [2024-12-15 02:26:23.439618] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 2.038 ms 00:30:58.723 [2024-12-15 02:26:23.439626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.439801] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.439809] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Stop core poller 00:30:58.723 [2024-12-15 02:26:23.439815] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.159 ms 00:30:58.723 [2024-12-15 02:26:23.439821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.440902] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.441011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist L2P 00:30:58.723 [2024-12-15 02:26:23.441023] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 1.069 ms 00:30:58.723 [2024-12-15 02:26:23.441033] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.441896] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.441910] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Finish L2P trims 00:30:58.723 [2024-12-15 02:26:23.441918] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.839 ms 00:30:58.723 [2024-12-15 02:26:23.441924] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.449424] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.449449] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist NV cache metadata 00:30:58.723 [2024-12-15 02:26:23.449456] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.466 ms 00:30:58.723 [2024-12-15 02:26:23.449466] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.453703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.453728] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist valid map metadata 00:30:58.723 [2024-12-15 02:26:23.453736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 4.212 ms 00:30:58.723 [2024-12-15 02:26:23.453743] mngt/ftl_mngt.c: 431:trace_step: 
*NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.453798] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.453805] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist P2L metadata 00:30:58.723 [2024-12-15 02:26:23.453811] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.029 ms 00:30:58.723 [2024-12-15 02:26:23.453821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.460933] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.460957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist band info metadata 00:30:58.723 [2024-12-15 02:26:23.460964] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.100 ms 00:30:58.723 [2024-12-15 02:26:23.460969] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.467958] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.468058] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist trim metadata 00:30:58.723 [2024-12-15 02:26:23.468070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.966 ms 00:30:58.723 [2024-12-15 02:26:23.468075] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.475079] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.475170] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Persist superblock 00:30:58.723 [2024-12-15 02:26:23.475181] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 6.981 ms 00:30:58.723 [2024-12-15 02:26:23.475186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.482310] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.723 [2024-12-15 02:26:23.482333] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Set FTL clean state 00:30:58.723 [2024-12-15 02:26:23.482340] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 7.070 ms 00:30:58.723 [2024-12-15 02:26:23.482345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.723 [2024-12-15 02:26:23.482368] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Bands validity: 00:30:58.723 [2024-12-15 02:26:23.482379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 1: 261120 / 261120 wr_cnt: 1 state: closed 00:30:58.723 [2024-12-15 02:26:23.482386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 2: 261120 / 261120 wr_cnt: 1 state: closed 00:30:58.723 [2024-12-15 02:26:23.482392] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 3: 2048 / 261120 wr_cnt: 1 state: closed 00:30:58.723 [2024-12-15 02:26:23.482399] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482410] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482422] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 
[2024-12-15 02:26:23.482427] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482438] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482444] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482450] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482455] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482461] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482466] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482477] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:30:58.723 [2024-12-15 02:26:23.482484] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] 00:30:58.723 [2024-12-15 02:26:23.482490] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] device UUID: f0c11f49-e1c8-4c0f-9224-6a2fd4d8c2d2 00:30:58.723 [2024-12-15 02:26:23.482495] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total valid LBAs: 524288 00:30:58.723 [2024-12-15 02:26:23.482501] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] total writes: 320 00:30:58.723 [2024-12-15 02:26:23.482506] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] user writes: 0 00:30:58.723 [2024-12-15 02:26:23.482511] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] WAF: inf 00:30:58.724 [2024-12-15 02:26:23.482516] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] limits: 00:30:58.724 [2024-12-15 02:26:23.482522] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] crit: 0 00:30:58.724 [2024-12-15 02:26:23.482531] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] high: 0 00:30:58.724 [2024-12-15 02:26:23.482536] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] low: 0 00:30:58.724 [2024-12-15 02:26:23.482540] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl] start: 0 00:30:58.724 [2024-12-15 02:26:23.482546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.724 [2024-12-15 02:26:23.482552] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Dump statistics 00:30:58.724 [2024-12-15 02:26:23.482560] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.179 ms 00:30:58.724 [2024-12-15 02:26:23.482565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.492051] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.983 [2024-12-15 02:26:23.492074] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize L2P 00:30:58.983 [2024-12-15 02:26:23.492082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 9.466 ms 00:30:58.983 [2024-12-15 02:26:23.492088] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 
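The two "Validate MD5 checksum" iterations above come from test/ftl/upgrade_shutdown.sh: after the FTL device comes back from the shutdown under test, each 1 GiB stripe is read back through the tcp_dd helper (the spdk_dd invocation traced at ftl/common.sh@199) and its md5sum is compared with the checksum recorded beforehand. A minimal sketch of that loop, with paraphrased variable names rather than the exact SPDK source:

    i=0; skip=0
    while (( i < iterations )); do
        echo "Validate MD5 checksum, iteration $((i + 1))"
        # Read the next 1024 x 1 MiB blocks (queue depth 2) from the FTL
        # bdev ftln1 back into a plain file via spdk_dd over the RPC socket.
        tcp_dd --ib=ftln1 --of="$testdir/file" --bs=1048576 --count=1024 --qd=2 --skip=$skip
        skip=$((skip + 1024))
        sum=$(md5sum "$testdir/file" | cut -f1 -d' ')
        # A mismatch against the pre-shutdown checksum fails the test.
        [[ $sum == "${expected_md5[i]}" ]] || exit 1
        i=$((i + 1))
    done

The escaped right-hand side in the trace ([[ f753... != \f\7\5\3... ]]) is only bash quoting of the stored checksum; both iterations matched, so the script fell through to the killprocess and FTL shutdown records above.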
00:30:58.983 [2024-12-15 02:26:23.492372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Action 00:30:58.983 [2024-12-15 02:26:23.492383] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Deinitialize P2L checkpointing 00:30:58.983 [2024-12-15 02:26:23.492389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.266 ms 00:30:58.983 [2024-12-15 02:26:23.492395] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.524975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.525002] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize reloc 00:30:58.983 [2024-12-15 02:26:23.525011] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.525020] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.525042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.525048] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands metadata 00:30:58.983 [2024-12-15 02:26:23.525055] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.525060] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.525110] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.525118] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize trim map 00:30:58.983 [2024-12-15 02:26:23.525124] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.525130] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.525145] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.525151] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize valid map 00:30:58.983 [2024-12-15 02:26:23.525157] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.525163] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.583126] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.583165] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize NV cache 00:30:58.983 [2024-12-15 02:26:23.583173] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.583179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.630918] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.630948] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize metadata 00:30:58.983 [2024-12-15 02:26:23.630956] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.630963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.631022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.631030] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize core IO channel 00:30:58.983 [2024-12-15 02:26:23.631036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.631043] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.631074] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.631089] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize bands 00:30:58.983 [2024-12-15 02:26:23.631096] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.631101] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.631173] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.631180] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize memory pools 00:30:58.983 [2024-12-15 02:26:23.631186] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.631192] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.983 [2024-12-15 02:26:23.631232] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.983 [2024-12-15 02:26:23.631239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Initialize superblock 00:30:58.983 [2024-12-15 02:26:23.631247] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.983 [2024-12-15 02:26:23.631252] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.984 [2024-12-15 02:26:23.631280] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.984 [2024-12-15 02:26:23.631287] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open cache bdev 00:30:58.984 [2024-12-15 02:26:23.631293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.984 [2024-12-15 02:26:23.631299] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.984 [2024-12-15 02:26:23.631330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl] Rollback 00:30:58.984 [2024-12-15 02:26:23.631339] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl] name: Open base bdev 00:30:58.984 [2024-12-15 02:26:23.631345] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl] duration: 0.000 ms 00:30:58.984 [2024-12-15 02:26:23.631351] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl] status: 0 00:30:58.984 [2024-12-15 02:26:23.631442] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl] Management process finished, name 'FTL shutdown', duration = 193.952 ms, result 0 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@132 -- # unset spdk_tgt_pid 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@145 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@194 -- # tcp_initiator_cleanup 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@188 -- # tcp_initiator_shutdown 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@181 -- # [[ -n '' ]] 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@189 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:59.553 Remove shared memory files 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/upgrade_shutdown.sh@15 -- # remove_shm 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@204 -- # echo Remove shared memory files 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@205 -- # rm -f rm -f 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@206 -- # rm -f rm -f 00:30:59.553 02:26:24 
ftl.ftl_upgrade_shutdown -- ftl/common.sh@207 -- # rm -f rm -f /dev/shm/spdk_tgt_trace.pid85271 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- ftl/common.sh@209 -- # rm -f rm -f 00:30:59.553 ************************************ 00:30:59.553 END TEST ftl_upgrade_shutdown 00:30:59.553 ************************************ 00:30:59.553 00:30:59.553 real 1m26.286s 00:30:59.553 user 1m57.910s 00:30:59.553 sys 0m20.004s 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@1130 -- # xtrace_disable 00:30:59.553 02:26:24 ftl.ftl_upgrade_shutdown -- common/autotest_common.sh@10 -- # set +x 00:30:59.812 02:26:24 ftl -- ftl/ftl.sh@80 -- # [[ 1 -eq 1 ]] 00:30:59.812 02:26:24 ftl -- ftl/ftl.sh@81 -- # run_test ftl_restore_fast /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:59.812 02:26:24 ftl -- common/autotest_common.sh@1105 -- # '[' 6 -le 1 ']' 00:30:59.812 02:26:24 ftl -- common/autotest_common.sh@1111 -- # xtrace_disable 00:30:59.812 02:26:24 ftl -- common/autotest_common.sh@10 -- # set +x 00:30:59.812 ************************************ 00:30:59.812 START TEST ftl_restore_fast 00:30:59.812 ************************************ 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1129 -- # /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh -f -c 0000:00:10.0 0000:00:11.0 00:30:59.812 * Looking for test storage... 00:30:59.812 * Found test storage at /home/vagrant/spdk_repo/spdk/test/ftl 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1710 -- # [[ y == y ]] 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # awk '{print $NF}' 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lcov --version 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1711 -- # lt 1.15 2 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@373 -- # cmp_versions 1.15 '<' 2 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@333 -- # local ver1 ver1_l 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@334 -- # local ver2 ver2_l 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # IFS=.-: 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@336 -- # read -ra ver1 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # IFS=.-: 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@337 -- # read -ra ver2 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@338 -- # local 'op=<' 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@340 -- # ver1_l=2 00:30:59.812 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@341 -- # ver2_l=1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@343 -- # local lt=0 gt=0 eq=0 v 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@344 -- # case "$op" in 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@345 -- # : 1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v = 0 )) 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@364 -- # (( v < (ver1_l > ver2_l ? 
ver1_l : ver2_l) )) 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # decimal 1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 1 =~ ^[0-9]+$ ]] 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@365 -- # ver1[v]=1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # decimal 2 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@353 -- # local d=2 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@354 -- # [[ 2 =~ ^[0-9]+$ ]] 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@355 -- # echo 2 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@366 -- # ver2[v]=2 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@367 -- # (( ver1[v] > ver2[v] )) 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # (( ver1[v] < ver2[v] )) 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- scripts/common.sh@368 -- # return 0 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1712 -- # lcov_rc_opt='--rc lcov_branch_coverage=1 --rc lcov_function_coverage=1' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # export 'LCOV_OPTS= 00:30:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.813 --rc genhtml_branch_coverage=1 00:30:59.813 --rc genhtml_function_coverage=1 00:30:59.813 --rc genhtml_legend=1 00:30:59.813 --rc geninfo_all_blocks=1 00:30:59.813 --rc geninfo_unexecuted_blocks=1 00:30:59.813 00:30:59.813 ' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1724 -- # LCOV_OPTS=' 00:30:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.813 --rc genhtml_branch_coverage=1 00:30:59.813 --rc genhtml_function_coverage=1 00:30:59.813 --rc genhtml_legend=1 00:30:59.813 --rc geninfo_all_blocks=1 00:30:59.813 --rc geninfo_unexecuted_blocks=1 00:30:59.813 00:30:59.813 ' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # export 'LCOV=lcov 00:30:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.813 --rc genhtml_branch_coverage=1 00:30:59.813 --rc genhtml_function_coverage=1 00:30:59.813 --rc genhtml_legend=1 00:30:59.813 --rc geninfo_all_blocks=1 00:30:59.813 --rc geninfo_unexecuted_blocks=1 00:30:59.813 00:30:59.813 ' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@1725 -- # LCOV='lcov 00:30:59.813 --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 00:30:59.813 --rc genhtml_branch_coverage=1 00:30:59.813 --rc genhtml_function_coverage=1 00:30:59.813 --rc genhtml_legend=1 00:30:59.813 --rc geninfo_all_blocks=1 00:30:59.813 --rc geninfo_unexecuted_blocks=1 00:30:59.813 00:30:59.813 ' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@9 -- # source /home/vagrant/spdk_repo/spdk/test/ftl/common.sh 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # dirname /home/vagrant/spdk_repo/spdk/test/ftl/restore.sh 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@8 -- # testdir=/home/vagrant/spdk_repo/spdk/test/ftl 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # readlink -f /home/vagrant/spdk_repo/spdk/test/ftl/../.. 
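The scripts/common.sh trace just above is the lcov version gate: lt 1.15 2 asks whether the installed lcov (1.15 here) predates 2.x, which selects the --rc lcov_branch_coverage/lcov_function_coverage options exported for coverage runs. Condensed from the trace (helper names match it; the real function also normalizes non-numeric components through the decimal helper seen at scripts/common.sh@353):

    cmp_versions() {                     # usage: cmp_versions 1.15 '<' 2
        local IFS=.-:
        local -a ver1 ver2
        read -ra ver1 <<< "$1"
        read -ra ver2 <<< "$3"
        local v op=$2
        # Compare component by component, padding the shorter version with 0.
        for (( v = 0; v < (${#ver1[@]} > ${#ver2[@]} ? ${#ver1[@]} : ${#ver2[@]}); v++ )); do
            (( ${ver1[v]:-0} > ${ver2[v]:-0} )) && { [[ $op == '>' ]]; return; }
            (( ${ver1[v]:-0} < ${ver2[v]:-0} )) && { [[ $op == '<' ]]; return; }
        done
        [[ $op == '==' || $op == '<=' || $op == '>=' ]]
    }
    lt() { cmp_versions "$1" '<' "$2"; }   # 1.15 < 2 is true, so the 1.x options apply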
00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@9 -- # rootdir=/home/vagrant/spdk_repo/spdk 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@10 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # export 'ftl_tgt_core_mask=[0]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@12 -- # ftl_tgt_core_mask='[0]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # export spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@14 -- # spdk_tgt_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # export 'spdk_tgt_cpumask=[0]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@15 -- # spdk_tgt_cpumask='[0]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # export spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@16 -- # spdk_tgt_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/tgt.json 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # export spdk_tgt_pid= 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@17 -- # spdk_tgt_pid= 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # export spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@19 -- # spdk_ini_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # export 'spdk_ini_cpumask=[1]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@20 -- # spdk_ini_cpumask='[1]' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # export spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@21 -- # spdk_ini_rpc=/var/tmp/spdk.tgt.sock 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # export spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@22 -- # spdk_ini_cnfg=/home/vagrant/spdk_repo/spdk/test/ftl/config/ini.json 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # export spdk_ini_pid= 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@23 -- # spdk_ini_pid= 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # export spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/common.sh@25 -- # spdk_dd_bin=/home/vagrant/spdk_repo/spdk/build/bin/spdk_dd 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@11 -- # rpc_py=/home/vagrant/spdk_repo/spdk/scripts/rpc.py 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mktemp -d 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@13 -- # mount_dir=/tmp/tmp.Zq5WcEmNZq 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@19 -- # fast_shutdown=1 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@16 -- # case $opt in 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@18 -- # nv_cache=0000:00:10.0 00:30:59.813 02:26:24 ftl.ftl_restore_fast 
-- ftl/restore.sh@15 -- # getopts :u:c:f opt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@23 -- # shift 3 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@24 -- # device=0000:00:11.0 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@25 -- # timeout=240 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@36 -- # trap 'restore_kill; exit 1' SIGINT SIGTERM EXIT 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@39 -- # svcpid=85766 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@41 -- # waitforlisten 85766 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- ftl/restore.sh@38 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@835 -- # '[' -z 85766 ']' 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@840 -- # local max_retries=100 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:30:59.813 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@844 -- # xtrace_disable 00:30:59.813 02:26:24 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:31:00.072 [2024-12-15 02:26:24.598071] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:31:00.072 [2024-12-15 02:26:24.598328] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85766 ] 00:31:00.072 [2024-12-15 02:26:24.754701] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:00.072 [2024-12-15 02:26:24.832121] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@868 -- # return 0 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # create_base_bdev nvme0 0000:00:11.0 103424 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@54 -- # local name=nvme0 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@55 -- # local base_bdf=0000:00:11.0 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@56 -- # local size=103424 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@59 -- # local base_bdev 00:31:00.640 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@60 -- # base_bdev=nvme0n1 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@62 -- # local base_size 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # get_bdev_size nvme0n1 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=nvme0n1 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- 
common/autotest_common.sh@1385 -- # local nb 00:31:00.897 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b nvme0n1 00:31:01.155 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:01.155 { 00:31:01.155 "name": "nvme0n1", 00:31:01.155 "aliases": [ 00:31:01.155 "7ca263a9-e6ae-4190-b18e-814527878439" 00:31:01.155 ], 00:31:01.155 "product_name": "NVMe disk", 00:31:01.155 "block_size": 4096, 00:31:01.155 "num_blocks": 1310720, 00:31:01.155 "uuid": "7ca263a9-e6ae-4190-b18e-814527878439", 00:31:01.155 "numa_id": -1, 00:31:01.155 "assigned_rate_limits": { 00:31:01.156 "rw_ios_per_sec": 0, 00:31:01.156 "rw_mbytes_per_sec": 0, 00:31:01.156 "r_mbytes_per_sec": 0, 00:31:01.156 "w_mbytes_per_sec": 0 00:31:01.156 }, 00:31:01.156 "claimed": true, 00:31:01.156 "claim_type": "read_many_write_one", 00:31:01.156 "zoned": false, 00:31:01.156 "supported_io_types": { 00:31:01.156 "read": true, 00:31:01.156 "write": true, 00:31:01.156 "unmap": true, 00:31:01.156 "flush": true, 00:31:01.156 "reset": true, 00:31:01.156 "nvme_admin": true, 00:31:01.156 "nvme_io": true, 00:31:01.156 "nvme_io_md": false, 00:31:01.156 "write_zeroes": true, 00:31:01.156 "zcopy": false, 00:31:01.156 "get_zone_info": false, 00:31:01.156 "zone_management": false, 00:31:01.156 "zone_append": false, 00:31:01.156 "compare": true, 00:31:01.156 "compare_and_write": false, 00:31:01.156 "abort": true, 00:31:01.156 "seek_hole": false, 00:31:01.156 "seek_data": false, 00:31:01.156 "copy": true, 00:31:01.156 "nvme_iov_md": false 00:31:01.156 }, 00:31:01.156 "driver_specific": { 00:31:01.156 "nvme": [ 00:31:01.156 { 00:31:01.156 "pci_address": "0000:00:11.0", 00:31:01.156 "trid": { 00:31:01.156 "trtype": "PCIe", 00:31:01.156 "traddr": "0000:00:11.0" 00:31:01.156 }, 00:31:01.156 "ctrlr_data": { 00:31:01.156 "cntlid": 0, 00:31:01.156 "vendor_id": "0x1b36", 00:31:01.156 "model_number": "QEMU NVMe Ctrl", 00:31:01.156 "serial_number": "12341", 00:31:01.156 "firmware_revision": "8.0.0", 00:31:01.156 "subnqn": "nqn.2019-08.org.qemu:12341", 00:31:01.156 "oacs": { 00:31:01.156 "security": 0, 00:31:01.156 "format": 1, 00:31:01.156 "firmware": 0, 00:31:01.156 "ns_manage": 1 00:31:01.156 }, 00:31:01.156 "multi_ctrlr": false, 00:31:01.156 "ana_reporting": false 00:31:01.156 }, 00:31:01.156 "vs": { 00:31:01.156 "nvme_version": "1.4" 00:31:01.156 }, 00:31:01.156 "ns_data": { 00:31:01.156 "id": 1, 00:31:01.156 "can_share": false 00:31:01.156 } 00:31:01.156 } 00:31:01.156 ], 00:31:01.156 "mp_policy": "active_passive" 00:31:01.156 } 00:31:01.156 } 00:31:01.156 ]' 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=1310720 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=5120 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 5120 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@63 -- # base_size=5120 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@64 -- # [[ 103424 -le 5120 ]] 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@67 -- # clear_lvols 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # 
/home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:31:01.156 02:26:25 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:31:01.415 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@28 -- # stores=8746e643-1576-460d-8e8d-19e56ee9f866 00:31:01.415 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@29 -- # for lvs in $stores 00:31:01.415 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8746e643-1576-460d-8e8d-19e56ee9f866 00:31:01.672 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create_lvstore nvme0n1 lvs 00:31:01.930 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@68 -- # lvs=8f6c236a-27f0-465d-80b3-b21d1c704e4d 00:31:01.930 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@69 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_create nvme0n1p0 103424 -t -u 8f6c236a-27f0-465d-80b3-b21d1c704e4d 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/restore.sh@43 -- # split_bdev=aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/restore.sh@44 -- # '[' -n 0000:00:10.0 ']' 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # create_nv_cache_bdev nvc0 0000:00:10.0 aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@35 -- # local name=nvc0 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@36 -- # local cache_bdf=0000:00:10.0 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@37 -- # local base_bdev=aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@38 -- # local cache_size= 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # get_bdev_size aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:02.189 { 00:31:02.189 "name": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:02.189 "aliases": [ 00:31:02.189 "lvs/nvme0n1p0" 00:31:02.189 ], 00:31:02.189 "product_name": "Logical Volume", 00:31:02.189 "block_size": 4096, 00:31:02.189 "num_blocks": 26476544, 00:31:02.189 "uuid": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:02.189 "assigned_rate_limits": { 00:31:02.189 "rw_ios_per_sec": 0, 00:31:02.189 "rw_mbytes_per_sec": 0, 00:31:02.189 "r_mbytes_per_sec": 0, 00:31:02.189 "w_mbytes_per_sec": 0 00:31:02.189 }, 00:31:02.189 "claimed": false, 00:31:02.189 "zoned": false, 00:31:02.189 "supported_io_types": { 00:31:02.189 "read": true, 00:31:02.189 "write": true, 00:31:02.189 "unmap": true, 00:31:02.189 "flush": false, 00:31:02.189 "reset": true, 00:31:02.189 "nvme_admin": false, 00:31:02.189 "nvme_io": false, 00:31:02.189 "nvme_io_md": false, 00:31:02.189 "write_zeroes": true, 00:31:02.189 "zcopy": false, 00:31:02.189 "get_zone_info": false, 00:31:02.189 "zone_management": false, 00:31:02.189 
"zone_append": false, 00:31:02.189 "compare": false, 00:31:02.189 "compare_and_write": false, 00:31:02.189 "abort": false, 00:31:02.189 "seek_hole": true, 00:31:02.189 "seek_data": true, 00:31:02.189 "copy": false, 00:31:02.189 "nvme_iov_md": false 00:31:02.189 }, 00:31:02.189 "driver_specific": { 00:31:02.189 "lvol": { 00:31:02.189 "lvol_store_uuid": "8f6c236a-27f0-465d-80b3-b21d1c704e4d", 00:31:02.189 "base_bdev": "nvme0n1", 00:31:02.189 "thin_provision": true, 00:31:02.189 "num_allocated_clusters": 0, 00:31:02.189 "snapshot": false, 00:31:02.189 "clone": false, 00:31:02.189 "esnap_clone": false 00:31:02.189 } 00:31:02.189 } 00:31:02.189 } 00:31:02.189 ]' 00:31:02.189 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@41 -- # local base_size=5171 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@44 -- # local nvc_bdev 00:31:02.448 02:26:26 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvc0 -t PCIe -a 0000:00:10.0 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- ftl/common.sh@45 -- # nvc_bdev=nvc0n1 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- ftl/common.sh@47 -- # [[ -z '' ]] 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # get_bdev_size aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:02.707 { 00:31:02.707 "name": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:02.707 "aliases": [ 00:31:02.707 "lvs/nvme0n1p0" 00:31:02.707 ], 00:31:02.707 "product_name": "Logical Volume", 00:31:02.707 "block_size": 4096, 00:31:02.707 "num_blocks": 26476544, 00:31:02.707 "uuid": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:02.707 "assigned_rate_limits": { 00:31:02.707 "rw_ios_per_sec": 0, 00:31:02.707 "rw_mbytes_per_sec": 0, 00:31:02.707 "r_mbytes_per_sec": 0, 00:31:02.707 "w_mbytes_per_sec": 0 00:31:02.707 }, 00:31:02.707 "claimed": false, 00:31:02.707 "zoned": false, 00:31:02.707 "supported_io_types": { 00:31:02.707 "read": true, 00:31:02.707 "write": true, 00:31:02.707 "unmap": true, 00:31:02.707 "flush": false, 00:31:02.707 "reset": true, 00:31:02.707 "nvme_admin": false, 00:31:02.707 "nvme_io": false, 00:31:02.707 "nvme_io_md": false, 00:31:02.707 "write_zeroes": true, 00:31:02.707 "zcopy": false, 00:31:02.707 "get_zone_info": false, 00:31:02.707 
"zone_management": false, 00:31:02.707 "zone_append": false, 00:31:02.707 "compare": false, 00:31:02.707 "compare_and_write": false, 00:31:02.707 "abort": false, 00:31:02.707 "seek_hole": true, 00:31:02.707 "seek_data": true, 00:31:02.707 "copy": false, 00:31:02.707 "nvme_iov_md": false 00:31:02.707 }, 00:31:02.707 "driver_specific": { 00:31:02.707 "lvol": { 00:31:02.707 "lvol_store_uuid": "8f6c236a-27f0-465d-80b3-b21d1c704e4d", 00:31:02.707 "base_bdev": "nvme0n1", 00:31:02.707 "thin_provision": true, 00:31:02.707 "num_allocated_clusters": 0, 00:31:02.707 "snapshot": false, 00:31:02.707 "clone": false, 00:31:02.707 "esnap_clone": false 00:31:02.707 } 00:31:02.707 } 00:31:02.707 } 00:31:02.707 ]' 00:31:02.707 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- ftl/common.sh@48 -- # cache_size=5171 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- ftl/common.sh@50 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_split_create nvc0n1 -s 5171 1 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@45 -- # nvc_bdev=nvc0n1p0 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # get_bdev_size aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1382 -- # local bdev_name=aebeae74-2243-4059-95df-aea8ac8c590a 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1383 -- # local bdev_info 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1384 -- # local bs 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1385 -- # local nb 00:31:02.966 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_get_bdevs -b aebeae74-2243-4059-95df-aea8ac8c590a 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1386 -- # bdev_info='[ 00:31:03.224 { 00:31:03.224 "name": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:03.224 "aliases": [ 00:31:03.224 "lvs/nvme0n1p0" 00:31:03.224 ], 00:31:03.224 "product_name": "Logical Volume", 00:31:03.224 "block_size": 4096, 00:31:03.224 "num_blocks": 26476544, 00:31:03.224 "uuid": "aebeae74-2243-4059-95df-aea8ac8c590a", 00:31:03.224 "assigned_rate_limits": { 00:31:03.224 "rw_ios_per_sec": 0, 00:31:03.224 "rw_mbytes_per_sec": 0, 00:31:03.224 "r_mbytes_per_sec": 0, 00:31:03.224 "w_mbytes_per_sec": 0 00:31:03.224 }, 00:31:03.224 "claimed": false, 00:31:03.224 "zoned": false, 00:31:03.224 "supported_io_types": { 00:31:03.224 "read": true, 00:31:03.224 "write": true, 00:31:03.224 "unmap": true, 00:31:03.224 "flush": false, 00:31:03.224 "reset": true, 00:31:03.224 "nvme_admin": false, 00:31:03.224 "nvme_io": false, 00:31:03.224 "nvme_io_md": false, 00:31:03.224 "write_zeroes": true, 00:31:03.224 "zcopy": false, 00:31:03.224 "get_zone_info": false, 00:31:03.224 "zone_management": false, 00:31:03.224 "zone_append": false, 00:31:03.224 "compare": false, 00:31:03.224 "compare_and_write": false, 00:31:03.224 "abort": false, 
00:31:03.224 "seek_hole": true, 00:31:03.224 "seek_data": true, 00:31:03.224 "copy": false, 00:31:03.224 "nvme_iov_md": false 00:31:03.224 }, 00:31:03.224 "driver_specific": { 00:31:03.224 "lvol": { 00:31:03.224 "lvol_store_uuid": "8f6c236a-27f0-465d-80b3-b21d1c704e4d", 00:31:03.224 "base_bdev": "nvme0n1", 00:31:03.224 "thin_provision": true, 00:31:03.224 "num_allocated_clusters": 0, 00:31:03.224 "snapshot": false, 00:31:03.224 "clone": false, 00:31:03.224 "esnap_clone": false 00:31:03.224 } 00:31:03.224 } 00:31:03.224 } 00:31:03.224 ]' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # jq '.[] .block_size' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1387 -- # bs=4096 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # jq '.[] .num_blocks' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1388 -- # nb=26476544 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1391 -- # bdev_size=103424 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- common/autotest_common.sh@1392 -- # echo 103424 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@48 -- # l2p_dram_size_mb=10 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@49 -- # ftl_construct_args='bdev_ftl_create -b ftl0 -d aebeae74-2243-4059-95df-aea8ac8c590a --l2p_dram_limit 10' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@51 -- # '[' -n '' ']' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # '[' -n 0000:00:10.0 ']' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@52 -- # ftl_construct_args+=' -c nvc0n1p0' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@54 -- # '[' 1 -eq 1 ']' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@55 -- # ftl_construct_args+=' --fast-shutdown' 00:31:03.224 02:26:27 ftl.ftl_restore_fast -- ftl/restore.sh@58 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py -t 240 bdev_ftl_create -b ftl0 -d aebeae74-2243-4059-95df-aea8ac8c590a --l2p_dram_limit 10 -c nvc0n1p0 --fast-shutdown 00:31:03.484 [2024-12-15 02:26:28.153519] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.153558] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:03.484 [2024-12-15 02:26:28.153570] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:03.484 [2024-12-15 02:26:28.153576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.153617] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.153624] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:03.484 [2024-12-15 02:26:28.153632] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.027 ms 00:31:03.484 [2024-12-15 02:26:28.153639] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.153658] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:03.484 [2024-12-15 02:26:28.154192] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:03.484 [2024-12-15 02:26:28.154221] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.154228] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:03.484 [2024-12-15 02:26:28.154236] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.567 ms 00:31:03.484 [2024-12-15 02:26:28.154242] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.154302] mngt/ftl_mngt_md.c: 570:ftl_mngt_superblock_init: *NOTICE*: [FTL][ftl0] Create new FTL, UUID 4a227f38-bb55-495b-8e3b-2041cd8dcaa2 00:31:03.484 [2024-12-15 02:26:28.155226] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.155257] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Default-initialize superblock 00:31:03.484 [2024-12-15 02:26:28.155264] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.017 ms 00:31:03.484 [2024-12-15 02:26:28.155272] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.159965] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.159995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:03.484 [2024-12-15 02:26:28.160002] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.660 ms 00:31:03.484 [2024-12-15 02:26:28.160011] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.160076] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.160085] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:03.484 [2024-12-15 02:26:28.160091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:31:03.484 [2024-12-15 02:26:28.160104] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.160135] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.160144] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:03.484 [2024-12-15 02:26:28.160150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.010 ms 00:31:03.484 [2024-12-15 02:26:28.160159] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.160173] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:03.484 [2024-12-15 02:26:28.163043] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.163067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:03.484 [2024-12-15 02:26:28.163076] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.870 ms 00:31:03.484 [2024-12-15 02:26:28.163082] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.163109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.163115] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:03.484 [2024-12-15 02:26:28.163122] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.009 ms 00:31:03.484 [2024-12-15 02:26:28.163128] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.163152] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 1 00:31:03.484 [2024-12-15 02:26:28.163267] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:03.484 [2024-12-15 02:26:28.163279] 
upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:03.484 [2024-12-15 02:26:28.163288] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:03.484 [2024-12-15 02:26:28.163297] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:03.484 [2024-12-15 02:26:28.163304] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:03.484 [2024-12-15 02:26:28.163311] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:03.484 [2024-12-15 02:26:28.163317] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:03.484 [2024-12-15 02:26:28.163326] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:03.484 [2024-12-15 02:26:28.163332] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:03.484 [2024-12-15 02:26:28.163339] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.163349] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:03.484 [2024-12-15 02:26:28.163356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.189 ms 00:31:03.484 [2024-12-15 02:26:28.163362] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.484 [2024-12-15 02:26:28.163428] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.484 [2024-12-15 02:26:28.163434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:03.484 [2024-12-15 02:26:28.163441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.053 ms 00:31:03.485 [2024-12-15 02:26:28.163446] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.485 [2024-12-15 02:26:28.163521] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:03.485 [2024-12-15 02:26:28.163528] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:03.485 [2024-12-15 02:26:28.163536] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163541] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163549] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:03.485 [2024-12-15 02:26:28.163554] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163560] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163565] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:03.485 [2024-12-15 02:26:28.163571] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163576] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:03.485 [2024-12-15 02:26:28.163584] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:03.485 [2024-12-15 02:26:28.163590] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:03.485 [2024-12-15 02:26:28.163596] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:03.485 [2024-12-15 02:26:28.163602] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:03.485 [2024-12-15 02:26:28.163609] ftl_layout.c: 
131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:03.485 [2024-12-15 02:26:28.163614] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163622] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:03.485 [2024-12-15 02:26:28.163627] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163633] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163638] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:03.485 [2024-12-15 02:26:28.163645] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163656] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:03.485 [2024-12-15 02:26:28.163660] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163667] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163672] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:03.485 [2024-12-15 02:26:28.163678] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163683] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163689] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:03.485 [2024-12-15 02:26:28.163693] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163699] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163704] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:03.485 [2024-12-15 02:26:28.163712] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163717] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:03.485 [2024-12-15 02:26:28.163723] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:03.485 [2024-12-15 02:26:28.163728] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:03.485 [2024-12-15 02:26:28.163735] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:03.485 [2024-12-15 02:26:28.163740] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:03.485 [2024-12-15 02:26:28.163747] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:03.485 [2024-12-15 02:26:28.163751] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163758] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:03.485 [2024-12-15 02:26:28.163763] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:03.485 [2024-12-15 02:26:28.163769] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163774] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:03.485 [2024-12-15 02:26:28.163781] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:03.485 [2024-12-15 02:26:28.163786] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 
00:31:03.485 [2024-12-15 02:26:28.163793] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:03.485 [2024-12-15 02:26:28.163800] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:03.485 [2024-12-15 02:26:28.163807] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:03.485 [2024-12-15 02:26:28.163812] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:03.485 [2024-12-15 02:26:28.163819] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:03.485 [2024-12-15 02:26:28.163824] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:03.485 [2024-12-15 02:26:28.163830] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:03.485 [2024-12-15 02:26:28.163836] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:03.485 [2024-12-15 02:26:28.163844] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163852] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:03.485 [2024-12-15 02:26:28.163859] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:03.485 [2024-12-15 02:26:28.163864] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:03.485 [2024-12-15 02:26:28.163871] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:03.485 [2024-12-15 02:26:28.163876] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:03.485 [2024-12-15 02:26:28.163883] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:03.485 [2024-12-15 02:26:28.163888] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:03.485 [2024-12-15 02:26:28.163896] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:03.485 [2024-12-15 02:26:28.163901] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:03.485 [2024-12-15 02:26:28.163909] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163914] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163921] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163926] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163933] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 
00:31:03.485 [2024-12-15 02:26:28.163938] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:03.485 [2024-12-15 02:26:28.163945] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163952] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:31:03.485 [2024-12-15 02:26:28.163959] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:03.485 [2024-12-15 02:26:28.163964] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:03.485 [2024-12-15 02:26:28.163971] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:03.485 [2024-12-15 02:26:28.163977] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:03.485 [2024-12-15 02:26:28.163984] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:03.485 [2024-12-15 02:26:28.163990] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.510 ms 00:31:03.486 [2024-12-15 02:26:28.163997] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:03.486 [2024-12-15 02:26:28.164025] mngt/ftl_mngt_misc.c: 165:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] NV cache data region needs scrubbing, this may take a while. 00:31:03.486 [2024-12-15 02:26:28.164035] mngt/ftl_mngt_misc.c: 166:ftl_mngt_scrub_nv_cache: *NOTICE*: [FTL][ftl0] Scrubbing 5 chunks 00:31:07.695 [2024-12-15 02:26:31.907152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.907239] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Scrub NV cache 00:31:07.695 [2024-12-15 02:26:31.907257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3743.111 ms 00:31:07.695 [2024-12-15 02:26:31.907268] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.939007] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.939067] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:07.695 [2024-12-15 02:26:31.939083] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.493 ms 00:31:07.695 [2024-12-15 02:26:31.939095] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.939264] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.939279] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:07.695 [2024-12-15 02:26:31.939289] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.094 ms 00:31:07.695 [2024-12-15 02:26:31.939307] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.974831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.974886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:07.695 [2024-12-15 02:26:31.974899] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 35.465 ms 00:31:07.695 [2024-12-15 02:26:31.974911] mngt/ftl_mngt.c: 
431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.974947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.974962] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:07.695 [2024-12-15 02:26:31.974971] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:07.695 [2024-12-15 02:26:31.974989] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.975561] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.975597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:07.695 [2024-12-15 02:26:31.975607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.517 ms 00:31:07.695 [2024-12-15 02:26:31.975618] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.975735] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.975746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:07.695 [2024-12-15 02:26:31.975757] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.092 ms 00:31:07.695 [2024-12-15 02:26:31.975770] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:31.993012] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:31.993242] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:07.695 [2024-12-15 02:26:31.993262] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 17.223 ms 00:31:07.695 [2024-12-15 02:26:31.993273] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.021871] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:07.695 [2024-12-15 02:26:32.025680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.025726] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:07.695 [2024-12-15 02:26:32.025741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 32.312 ms 00:31:07.695 [2024-12-15 02:26:32.025749] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.119174] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.119424] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear L2P 00:31:07.695 [2024-12-15 02:26:32.119455] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 93.374 ms 00:31:07.695 [2024-12-15 02:26:32.119465] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.119670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.119686] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:07.695 [2024-12-15 02:26:32.119700] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.157 ms 00:31:07.695 [2024-12-15 02:26:32.119708] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.145642] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.145690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save 
initial band info metadata 00:31:07.695 [2024-12-15 02:26:32.145706] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.876 ms 00:31:07.695 [2024-12-15 02:26:32.145715] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.170360] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.170403] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Save initial chunk info metadata 00:31:07.695 [2024-12-15 02:26:32.170419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.588 ms 00:31:07.695 [2024-12-15 02:26:32.170426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.171027] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.171045] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:07.695 [2024-12-15 02:26:32.171057] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.556 ms 00:31:07.695 [2024-12-15 02:26:32.171067] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.250717] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.250766] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Wipe P2L region 00:31:07.695 [2024-12-15 02:26:32.250786] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 79.606 ms 00:31:07.695 [2024-12-15 02:26:32.250795] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.277941] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.278146] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim map 00:31:07.695 [2024-12-15 02:26:32.278174] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.031 ms 00:31:07.695 [2024-12-15 02:26:32.278183] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.303383] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.303426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Clear trim log 00:31:07.695 [2024-12-15 02:26:32.303441] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.109 ms 00:31:07.695 [2024-12-15 02:26:32.303449] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.329527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.329571] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:07.695 [2024-12-15 02:26:32.329586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.028 ms 00:31:07.695 [2024-12-15 02:26:32.329593] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.695 [2024-12-15 02:26:32.329646] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.695 [2024-12-15 02:26:32.329656] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:07.696 [2024-12-15 02:26:32.329672] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:31:07.696 [2024-12-15 02:26:32.329680] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:07.696 [2024-12-15 02:26:32.329776] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:07.696 [2024-12-15 
02:26:32.329789] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization
00:31:07.696 [2024-12-15 02:26:32.329801] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.043 ms
00:31:07.696 [2024-12-15 02:26:32.329809] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:07.696 [2024-12-15 02:26:32.330997] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 4176.984 ms, result 0
00:31:07.696 {
00:31:07.696 "name": "ftl0",
00:31:07.696 "uuid": "4a227f38-bb55-495b-8e3b-2041cd8dcaa2"
00:31:07.696 }
00:31:07.696 02:26:32 ftl.ftl_restore_fast -- ftl/restore.sh@61 -- # echo '{"subsystems": ['
00:31:07.696 02:26:32 ftl.ftl_restore_fast -- ftl/restore.sh@62 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py save_subsystem_config -n bdev
00:31:07.957 02:26:32 ftl.ftl_restore_fast -- ftl/restore.sh@63 -- # echo ']}'
00:31:07.957 02:26:32 ftl.ftl_restore_fast -- ftl/restore.sh@65 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_ftl_unload -b ftl0
00:31:08.220 [2024-12-15 02:26:32.762330] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:08.220 [2024-12-15 02:26:32.762390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:31:08.220 [2024-12-15 02:26:32.762403] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:31:08.220 [2024-12-15 02:26:32.762414] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.220 [2024-12-15 02:26:32.762438] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:31:08.220 [2024-12-15 02:26:32.765322] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:08.220 [2024-12-15 02:26:32.765360] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:31:08.220 [2024-12-15 02:26:32.765375] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.863 ms
00:31:08.220 [2024-12-15 02:26:32.765383] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.220 [2024-12-15 02:26:32.765647] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:08.220 [2024-12-15 02:26:32.765663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:31:08.220 [2024-12-15 02:26:32.765675] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.234 ms
00:31:08.220 [2024-12-15 02:26:32.765683] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.220 [2024-12-15 02:26:32.768947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:08.220 [2024-12-15 02:26:32.769097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist L2P
00:31:08.220 [2024-12-15 02:26:32.769116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.245 ms
00:31:08.220 [2024-12-15 02:26:32.769125] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.220 [2024-12-15 02:26:32.775392] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:31:08.220 [2024-12-15 02:26:32.775527] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finish L2P trims
00:31:08.220 [2024-12-15 02:26:32.775595] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 6.236 ms
00:31:08.220 [2024-12-15 02:26:32.775619] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.220 [2024-12-15 02:26:32.801293] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*:
[FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.801452] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist NV cache metadata 00:31:08.220 [2024-12-15 02:26:32.801515] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.579 ms 00:31:08.220 [2024-12-15 02:26:32.801538] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.818546] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.818698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist valid map metadata 00:31:08.220 [2024-12-15 02:26:32.818767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 16.914 ms 00:31:08.220 [2024-12-15 02:26:32.818792] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.819010] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.819044] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist P2L metadata 00:31:08.220 [2024-12-15 02:26:32.819125] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.129 ms 00:31:08.220 [2024-12-15 02:26:32.819149] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.844444] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.844588] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist band info metadata 00:31:08.220 [2024-12-15 02:26:32.844651] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.256 ms 00:31:08.220 [2024-12-15 02:26:32.844674] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.869595] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.869737] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist trim metadata 00:31:08.220 [2024-12-15 02:26:32.869797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.823 ms 00:31:08.220 [2024-12-15 02:26:32.869819] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.894212] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.894350] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Persist superblock 00:31:08.220 [2024-12-15 02:26:32.894411] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.276 ms 00:31:08.220 [2024-12-15 02:26:32.894434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.918585] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.220 [2024-12-15 02:26:32.918733] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL clean state 00:31:08.220 [2024-12-15 02:26:32.918797] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 23.918 ms 00:31:08.220 [2024-12-15 02:26:32.918821] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.220 [2024-12-15 02:26:32.918920] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:31:08.220 [2024-12-15 02:26:32.918952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.918990] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919055] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919119] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919152] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919297] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919531] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919591] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919620] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919651] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919680] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919740] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:31:08.220 [2024-12-15 02:26:32.919850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.919882] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.919911] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.919968] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.919999] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920029] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920108] 
ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920246] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920306] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920365] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920453] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920545] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920578] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920608] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920659] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920919] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.920952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 
[2024-12-15 02:26:32.921016] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921156] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921234] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921264] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921350] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921381] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921442] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921808] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921839] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921846] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921860] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921869] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921878] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921886] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921897] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921904] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 
state: free 00:31:08.221 [2024-12-15 02:26:32.921914] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921970] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.921998] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922006] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922025] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922035] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922052] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922060] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:31:08.221 [2024-12-15 02:26:32.922070] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922097] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:31:08.222 [2024-12-15 02:26:32.922142] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:31:08.222 [2024-12-15 02:26:32.922152] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a227f38-bb55-495b-8e3b-2041cd8dcaa2 
00:31:08.222 [2024-12-15 02:26:32.922161] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:31:08.222 [2024-12-15 02:26:32.922173] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 960 00:31:08.222 [2024-12-15 02:26:32.922183] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:31:08.222 [2024-12-15 02:26:32.922193] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:31:08.222 [2024-12-15 02:26:32.922213] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:31:08.222 [2024-12-15 02:26:32.922224] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:31:08.222 [2024-12-15 02:26:32.922232] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:31:08.222 [2024-12-15 02:26:32.922241] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:31:08.222 [2024-12-15 02:26:32.922247] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:31:08.222 [2024-12-15 02:26:32.922257] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.222 [2024-12-15 02:26:32.922265] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:31:08.222 [2024-12-15 02:26:32.922276] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.340 ms 00:31:08.222 [2024-12-15 02:26:32.922287] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.222 [2024-12-15 02:26:32.935838] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.222 [2024-12-15 02:26:32.935880] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:31:08.222 [2024-12-15 02:26:32.935894] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.501 ms 00:31:08.222 [2024-12-15 02:26:32.935902] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.222 [2024-12-15 02:26:32.936324] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:08.222 [2024-12-15 02:26:32.936341] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:31:08.222 [2024-12-15 02:26:32.936355] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.378 ms 00:31:08.222 [2024-12-15 02:26:32.936364] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:32.982111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:32.982298] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:08.484 [2024-12-15 02:26:32.982322] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:32.982331] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:32.982406] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:32.982416] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:08.484 [2024-12-15 02:26:32.982429] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:32.982437] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:32.982536] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:32.982548] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:08.484 [2024-12-15 02:26:32.982558] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:32.982566] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:32.982588] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:32.982597] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:08.484 [2024-12-15 02:26:32.982607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:32.982617] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.066530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.066583] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:08.484 [2024-12-15 02:26:33.066600] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.066609] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.134495] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.134549] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:08.484 [2024-12-15 02:26:33.134564] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.134576] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.134670] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.134682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:08.484 [2024-12-15 02:26:33.134693] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.134701] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.134772] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.134782] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:08.484 [2024-12-15 02:26:33.134794] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.134802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.134909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.134919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:08.484 [2024-12-15 02:26:33.134930] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.134938] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.134975] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.134986] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:31:08.484 [2024-12-15 02:26:33.134997] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:31:08.484 [2024-12-15 02:26:33.135005] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:08.484 [2024-12-15 02:26:33.135050] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:31:08.484 [2024-12-15 02:26:33.135059] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Open cache bdev
00:31:08.484 [2024-12-15 02:26:33.135070] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:08.484 [2024-12-15 02:26:33.135078] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.484 [2024-12-15 02:26:33.135130] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback
00:31:08.484 [2024-12-15 02:26:33.135141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev
00:31:08.484 [2024-12-15 02:26:33.135153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms
00:31:08.484 [2024-12-15 02:26:33.135161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:31:08.484 [2024-12-15 02:26:33.135342] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL shutdown', duration = 372.965 ms, result 0
00:31:08.484 true
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- ftl/restore.sh@66 -- # killprocess 85766
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85766 ']'
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85766
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # uname
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']'
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 85766
00:31:08.484 killing process with pid 85766
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@960 -- # process_name=reactor_0
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']'
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@972 -- # echo 'killing process with pid 85766'
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@973 -- # kill 85766
00:31:08.484 02:26:33 ftl.ftl_restore_fast -- common/autotest_common.sh@978 -- # wait 85766
00:31:15.092 02:26:39 ftl.ftl_restore_fast -- ftl/restore.sh@69 -- # dd if=/dev/urandom of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile bs=4K count=256K
00:31:19.303 262144+0 records in
00:31:19.303 262144+0 records out
00:31:19.303 1073741824 bytes (1.1 GB, 1.0 GiB) copied, 4.37372 s, 245 MB/s
00:31:19.303 02:26:43 ftl.ftl_restore_fast -- ftl/restore.sh@70 -- # md5sum /home/vagrant/spdk_repo/spdk/test/ftl/testfile
00:31:21.217 02:26:45 ftl.ftl_restore_fast -- ftl/restore.sh@73 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json
[2024-12-15 02:26:45.978463] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization...
00:31:21.217 [2024-12-15 02:26:45.978586] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid85995 ] 00:31:21.479 [2024-12-15 02:26:46.140717] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:31:21.743 [2024-12-15 02:26:46.253063] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:31:22.030 [2024-12-15 02:26:46.530696] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:22.030 [2024-12-15 02:26:46.530933] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:31:22.030 [2024-12-15 02:26:46.688471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.688628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:31:22.030 [2024-12-15 02:26:46.688648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:31:22.030 [2024-12-15 02:26:46.688657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.688711] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.688723] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:31:22.030 [2024-12-15 02:26:46.688732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.033 ms 00:31:22.030 [2024-12-15 02:26:46.688739] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.688759] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:31:22.030 [2024-12-15 02:26:46.689436] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:31:22.030 [2024-12-15 02:26:46.689452] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.689460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:31:22.030 [2024-12-15 02:26:46.689469] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.698 ms 00:31:22.030 [2024-12-15 02:26:46.689476] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.690562] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 0, shm_clean 0 00:31:22.030 [2024-12-15 02:26:46.703294] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.703426] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:31:22.030 [2024-12-15 02:26:46.703443] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.733 ms 00:31:22.030 [2024-12-15 02:26:46.703451] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.703581] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.703600] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:31:22.030 [2024-12-15 02:26:46.703608] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.021 ms 00:31:22.030 [2024-12-15 02:26:46.703616] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.708732] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:31:22.030 [2024-12-15 02:26:46.708762] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:31:22.030 [2024-12-15 02:26:46.708772] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.066 ms 00:31:22.030 [2024-12-15 02:26:46.708783] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.708856] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.708865] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:31:22.030 [2024-12-15 02:26:46.708873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.051 ms 00:31:22.030 [2024-12-15 02:26:46.708880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.708923] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.708932] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:31:22.030 [2024-12-15 02:26:46.708940] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:31:22.030 [2024-12-15 02:26:46.708946] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.708968] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:31:22.030 [2024-12-15 02:26:46.712491] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.712521] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:31:22.030 [2024-12-15 02:26:46.712532] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.527 ms 00:31:22.030 [2024-12-15 02:26:46.712540] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.712570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.712578] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:31:22.030 [2024-12-15 02:26:46.712586] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:31:22.030 [2024-12-15 02:26:46.712594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.712611] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:31:22.030 [2024-12-15 02:26:46.712630] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:31:22.030 [2024-12-15 02:26:46.712663] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:31:22.030 [2024-12-15 02:26:46.712680] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:31:22.030 [2024-12-15 02:26:46.712781] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:31:22.030 [2024-12-15 02:26:46.712791] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:31:22.030 [2024-12-15 02:26:46.712802] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:31:22.030 [2024-12-15 02:26:46.712811] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:31:22.030 [2024-12-15 02:26:46.712820] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:31:22.030 [2024-12-15 02:26:46.712828] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:31:22.030 [2024-12-15 02:26:46.712835] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:31:22.030 [2024-12-15 02:26:46.712842] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:31:22.030 [2024-12-15 02:26:46.712852] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:31:22.030 [2024-12-15 02:26:46.712859] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.712866] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:31:22.030 [2024-12-15 02:26:46.712873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.249 ms 00:31:22.030 [2024-12-15 02:26:46.712881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.712963] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.030 [2024-12-15 02:26:46.712971] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:31:22.030 [2024-12-15 02:26:46.712978] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:31:22.030 [2024-12-15 02:26:46.712985] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.030 [2024-12-15 02:26:46.713093] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:31:22.030 [2024-12-15 02:26:46.713103] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:31:22.030 [2024-12-15 02:26:46.713111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:22.030 [2024-12-15 02:26:46.713118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.030 [2024-12-15 02:26:46.713126] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:31:22.030 [2024-12-15 02:26:46.713132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:31:22.030 [2024-12-15 02:26:46.713138] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:31:22.030 [2024-12-15 02:26:46.713146] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:31:22.030 [2024-12-15 02:26:46.713153] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:31:22.030 [2024-12-15 02:26:46.713159] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:22.030 [2024-12-15 02:26:46.713166] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:31:22.030 [2024-12-15 02:26:46.713173] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:31:22.031 [2024-12-15 02:26:46.713179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:31:22.031 [2024-12-15 02:26:46.713192] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:31:22.031 [2024-12-15 02:26:46.713216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:31:22.031 [2024-12-15 02:26:46.713223] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713230] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:31:22.031 [2024-12-15 02:26:46.713236] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713242] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713249] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:31:22.031 [2024-12-15 02:26:46.713255] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713262] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713268] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:31:22.031 [2024-12-15 02:26:46.713275] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713282] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713288] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:31:22.031 [2024-12-15 02:26:46.713294] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713300] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:31:22.031 [2024-12-15 02:26:46.713313] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713319] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713326] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:31:22.031 [2024-12-15 02:26:46.713332] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713338] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:22.031 [2024-12-15 02:26:46.713345] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:31:22.031 [2024-12-15 02:26:46.713351] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:31:22.031 [2024-12-15 02:26:46.713357] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:31:22.031 [2024-12-15 02:26:46.713364] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:31:22.031 [2024-12-15 02:26:46.713370] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:31:22.031 [2024-12-15 02:26:46.713376] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713382] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:31:22.031 [2024-12-15 02:26:46.713388] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:31:22.031 [2024-12-15 02:26:46.713395] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713402] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:31:22.031 [2024-12-15 02:26:46.713409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:31:22.031 [2024-12-15 02:26:46.713416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713425] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:31:22.031 [2024-12-15 02:26:46.713433] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:31:22.031 [2024-12-15 02:26:46.713439] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:31:22.031 [2024-12-15 02:26:46.713446] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:31:22.031 
[2024-12-15 02:26:46.713452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:31:22.031 [2024-12-15 02:26:46.713459] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:31:22.031 [2024-12-15 02:26:46.713465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:31:22.031 [2024-12-15 02:26:46.713474] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:31:22.031 [2024-12-15 02:26:46.713482] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:31:22.031 [2024-12-15 02:26:46.713500] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:31:22.031 [2024-12-15 02:26:46.713507] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:31:22.031 [2024-12-15 02:26:46.713514] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:31:22.031 [2024-12-15 02:26:46.713521] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:31:22.031 [2024-12-15 02:26:46.713527] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:31:22.031 [2024-12-15 02:26:46.713534] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:31:22.031 [2024-12-15 02:26:46.713541] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:31:22.031 [2024-12-15 02:26:46.713548] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:31:22.031 [2024-12-15 02:26:46.713555] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713562] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713569] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713575] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713582] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:31:22.031 [2024-12-15 02:26:46.713589] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:31:22.031 [2024-12-15 02:26:46.713597] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713605] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:31:22.031 [2024-12-15 02:26:46.713612] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:31:22.031 [2024-12-15 02:26:46.713620] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:31:22.031 [2024-12-15 02:26:46.713627] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:31:22.031 [2024-12-15 02:26:46.713634] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.031 [2024-12-15 02:26:46.713641] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:31:22.031 [2024-12-15 02:26:46.713648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.609 ms 00:31:22.031 [2024-12-15 02:26:46.713656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.031 [2024-12-15 02:26:46.740249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.031 [2024-12-15 02:26:46.740284] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:31:22.031 [2024-12-15 02:26:46.740295] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 26.552 ms 00:31:22.031 [2024-12-15 02:26:46.740306] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.031 [2024-12-15 02:26:46.740391] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.031 [2024-12-15 02:26:46.740399] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:31:22.031 [2024-12-15 02:26:46.740406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:31:22.031 [2024-12-15 02:26:46.740413] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.791048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.791191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:31:22.299 [2024-12-15 02:26:46.791220] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.578 ms 00:31:22.299 [2024-12-15 02:26:46.791229] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.791270] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.791280] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:31:22.299 [2024-12-15 02:26:46.791293] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.003 ms 00:31:22.299 [2024-12-15 02:26:46.791301] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.791710] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.791727] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:31:22.299 [2024-12-15 02:26:46.791736] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.345 ms 00:31:22.299 [2024-12-15 02:26:46.791743] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.791869] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.791878] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:31:22.299 [2024-12-15 02:26:46.791889] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.108 ms 00:31:22.299 [2024-12-15 02:26:46.791896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.805619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.805658] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:31:22.299 [2024-12-15 02:26:46.805668] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.703 ms 00:31:22.299 [2024-12-15 02:26:46.805676] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.818615] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 0, empty chunks = 4 00:31:22.299 [2024-12-15 02:26:46.818749] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:31:22.299 [2024-12-15 02:26:46.818766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.818774] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:31:22.299 [2024-12-15 02:26:46.818783] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.998 ms 00:31:22.299 [2024-12-15 02:26:46.818791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.843168] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.843223] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:31:22.299 [2024-12-15 02:26:46.843233] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 24.343 ms 00:31:22.299 [2024-12-15 02:26:46.843241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.855200] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.855246] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:31:22.299 [2024-12-15 02:26:46.855257] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.910 ms 00:31:22.299 [2024-12-15 02:26:46.855263] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.866726] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.866759] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:31:22.299 [2024-12-15 02:26:46.866770] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 11.428 ms 00:31:22.299 [2024-12-15 02:26:46.866777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.867399] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.867423] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:31:22.299 [2024-12-15 02:26:46.867433] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.535 ms 00:31:22.299 [2024-12-15 02:26:46.867443] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.927370] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.927434] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:31:22.299 [2024-12-15 02:26:46.927448] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] 
duration: 59.907 ms 00:31:22.299 [2024-12-15 02:26:46.927464] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.938557] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:31:22.299 [2024-12-15 02:26:46.941578] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.941751] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:31:22.299 [2024-12-15 02:26:46.941771] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.060 ms 00:31:22.299 [2024-12-15 02:26:46.941780] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.941891] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.941902] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:31:22.299 [2024-12-15 02:26:46.941912] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:31:22.299 [2024-12-15 02:26:46.941920] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.942011] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.942022] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:31:22.299 [2024-12-15 02:26:46.942031] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.035 ms 00:31:22.299 [2024-12-15 02:26:46.942039] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.942063] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.942072] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:31:22.299 [2024-12-15 02:26:46.942080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:31:22.299 [2024-12-15 02:26:46.942089] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.299 [2024-12-15 02:26:46.942123] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:31:22.299 [2024-12-15 02:26:46.942137] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.299 [2024-12-15 02:26:46.942145] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:31:22.299 [2024-12-15 02:26:46.942153] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 00:31:22.299 [2024-12-15 02:26:46.942161] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.300 [2024-12-15 02:26:46.967694] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.300 [2024-12-15 02:26:46.967744] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:31:22.300 [2024-12-15 02:26:46.967758] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 25.510 ms 00:31:22.300 [2024-12-15 02:26:46.967772] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:31:22.300 [2024-12-15 02:26:46.967854] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:31:22.300 [2024-12-15 02:26:46.967864] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:31:22.300 [2024-12-15 02:26:46.967873] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:31:22.300 [2024-12-15 02:26:46.967881] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
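Each startup step above is traced as an Action / name / duration / status quadruple, and the finish_msg entry just below rolls those steps up into the total 'FTL startup' duration. For offline analysis of a saved copy of this console output, a minimal awk sketch can tally the per-step durations; the build.log filename is illustrative, and it assumes one log entry per line:

    # Sum the per-step durations (ms) reported by trace_step entries.
    awk '/trace_step/ && / duration: / { sum += $(NF-1) }
         END { printf "trace_step total: %.3f ms\n", sum }' build.log
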
00:31:22.300 [2024-12-15 02:26:46.969129] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 280.178 ms, result 0 00:31:23.245  [2024-12-15T02:26:49.382Z] Copying: 10/1024 [MB] (10 MBps) [2024-12-15T02:26:50.320Z] Copying: 45/1024 [MB] (34 MBps) [2024-12-15T02:26:51.264Z] Copying: 69/1024 [MB] (24 MBps) [2024-12-15T02:26:52.202Z] Copying: 79/1024 [MB] (10 MBps) [2024-12-15T02:26:53.135Z] Copying: 110/1024 [MB] (31 MBps) [2024-12-15T02:26:54.068Z] Copying: 159/1024 [MB] (49 MBps) [2024-12-15T02:26:55.000Z] Copying: 197/1024 [MB] (38 MBps) [2024-12-15T02:26:56.383Z] Copying: 228/1024 [MB] (30 MBps) [2024-12-15T02:26:57.327Z] Copying: 248/1024 [MB] (20 MBps) [2024-12-15T02:26:58.272Z] Copying: 264944/1048576 [kB] (10128 kBps) [2024-12-15T02:26:59.216Z] Copying: 268/1024 [MB] (10 MBps) [2024-12-15T02:27:00.152Z] Copying: 280/1024 [MB] (11 MBps) [2024-12-15T02:27:01.085Z] Copying: 300/1024 [MB] (20 MBps) [2024-12-15T02:27:02.017Z] Copying: 334/1024 [MB] (33 MBps) [2024-12-15T02:27:03.392Z] Copying: 384/1024 [MB] (50 MBps) [2024-12-15T02:27:04.336Z] Copying: 435/1024 [MB] (50 MBps) [2024-12-15T02:27:05.291Z] Copying: 446/1024 [MB] (11 MBps) [2024-12-15T02:27:06.227Z] Copying: 456/1024 [MB] (10 MBps) [2024-12-15T02:27:07.162Z] Copying: 500/1024 [MB] (43 MBps) [2024-12-15T02:27:08.097Z] Copying: 535/1024 [MB] (35 MBps) [2024-12-15T02:27:09.035Z] Copying: 575/1024 [MB] (40 MBps) [2024-12-15T02:27:10.051Z] Copying: 592/1024 [MB] (16 MBps) [2024-12-15T02:27:11.422Z] Copying: 621/1024 [MB] (29 MBps) [2024-12-15T02:27:11.987Z] Copying: 653/1024 [MB] (32 MBps) [2024-12-15T02:27:13.371Z] Copying: 691/1024 [MB] (37 MBps) [2024-12-15T02:27:14.307Z] Copying: 716/1024 [MB] (25 MBps) [2024-12-15T02:27:15.250Z] Copying: 732/1024 [MB] (16 MBps) [2024-12-15T02:27:16.193Z] Copying: 777/1024 [MB] (44 MBps) [2024-12-15T02:27:17.126Z] Copying: 802/1024 [MB] (25 MBps) [2024-12-15T02:27:18.064Z] Copying: 826/1024 [MB] (23 MBps) [2024-12-15T02:27:19.001Z] Copying: 850/1024 [MB] (24 MBps) [2024-12-15T02:27:20.373Z] Copying: 868/1024 [MB] (17 MBps) [2024-12-15T02:27:21.051Z] Copying: 895/1024 [MB] (26 MBps) [2024-12-15T02:27:22.003Z] Copying: 930/1024 [MB] (35 MBps) [2024-12-15T02:27:23.389Z] Copying: 950/1024 [MB] (19 MBps) [2024-12-15T02:27:24.333Z] Copying: 964/1024 [MB] (14 MBps) [2024-12-15T02:27:25.282Z] Copying: 985/1024 [MB] (20 MBps) [2024-12-15T02:27:26.224Z] Copying: 996/1024 [MB] (11 MBps) [2024-12-15T02:27:26.797Z] Copying: 1010/1024 [MB] (14 MBps) [2024-12-15T02:27:26.797Z] Copying: 1024/1024 [MB] (average 25 MBps)[2024-12-15 02:27:26.669042] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.032 [2024-12-15 02:27:26.669105] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:32:02.032 [2024-12-15 02:27:26.669120] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:02.032 [2024-12-15 02:27:26.669129] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.032 [2024-12-15 02:27:26.669151] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:32:02.032 [2024-12-15 02:27:26.672351] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.032 [2024-12-15 02:27:26.672394] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:32:02.032 [2024-12-15 02:27:26.672406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.183 ms 00:32:02.032 
[2024-12-15 02:27:26.672421] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.032 [2024-12-15 02:27:26.674795] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.032 [2024-12-15 02:27:26.674844] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:32:02.032 [2024-12-15 02:27:26.674855] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.347 ms 00:32:02.032 [2024-12-15 02:27:26.674863] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.032 [2024-12-15 02:27:26.674890] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.032 [2024-12-15 02:27:26.674899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:32:02.032 [2024-12-15 02:27:26.674908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:02.032 [2024-12-15 02:27:26.674916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.032 [2024-12-15 02:27:26.674982] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.032 [2024-12-15 02:27:26.674991] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:32:02.032 [2024-12-15 02:27:26.674999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.023 ms 00:32:02.032 [2024-12-15 02:27:26.675008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.032 [2024-12-15 02:27:26.675021] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:32:02.032 [2024-12-15 02:27:26.675034] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675043] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675050] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675058] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675079] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675093] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675108] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675116] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675130] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 
state: free 00:32:02.032 [2024-12-15 02:27:26.675139] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675161] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675170] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675178] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:32:02.032 [2024-12-15 02:27:26.675186] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675222] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675245] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675259] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675276] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675313] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675321] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675335] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675342] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 
0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675349] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675356] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675363] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675370] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675378] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675385] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675393] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675401] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675434] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675441] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675471] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675478] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675486] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675492] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675505] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675512] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675520] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675527] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675534] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675542] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675549] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675556] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675563] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675570] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675577] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675585] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675592] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675599] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675606] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675614] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675621] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675629] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675636] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675646] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675653] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675660] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675667] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675676] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675683] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675691] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675698] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675705] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675720] ftl_debug.c: 
167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675727] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675742] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:32:02.033 [2024-12-15 02:27:26.675749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675771] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675778] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675792] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675800] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:32:02.034 [2024-12-15 02:27:26.675815] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:32:02.034 [2024-12-15 02:27:26.675823] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a227f38-bb55-495b-8e3b-2041cd8dcaa2 00:32:02.034 [2024-12-15 02:27:26.675831] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 0 00:32:02.034 [2024-12-15 02:27:26.675838] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:32:02.034 [2024-12-15 02:27:26.675845] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:32:02.034 [2024-12-15 02:27:26.675856] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:32:02.034 [2024-12-15 02:27:26.675862] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:32:02.034 [2024-12-15 02:27:26.675871] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:32:02.034 [2024-12-15 02:27:26.675879] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:32:02.034 [2024-12-15 02:27:26.675886] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:32:02.034 [2024-12-15 02:27:26.675893] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:32:02.034 [2024-12-15 02:27:26.675899] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.034 [2024-12-15 02:27:26.675907] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:32:02.034 [2024-12-15 02:27:26.675915] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.879 ms 00:32:02.034 [2024-12-15 02:27:26.675923] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.689964] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.034 [2024-12-15 02:27:26.690142] mngt/ftl_mngt.c: 428:trace_step: 
*NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:32:02.034 [2024-12-15 02:27:26.690227] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.025 ms 00:32:02.034 [2024-12-15 02:27:26.690256] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.690660] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:02.034 [2024-12-15 02:27:26.690746] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:32:02.034 [2024-12-15 02:27:26.690793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.364 ms 00:32:02.034 [2024-12-15 02:27:26.690816] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.727440] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.034 [2024-12-15 02:27:26.727595] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:02.034 [2024-12-15 02:27:26.727652] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.034 [2024-12-15 02:27:26.727675] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.727757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.034 [2024-12-15 02:27:26.727781] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:02.034 [2024-12-15 02:27:26.727802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.034 [2024-12-15 02:27:26.727820] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.727892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.034 [2024-12-15 02:27:26.727969] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:02.034 [2024-12-15 02:27:26.727989] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.034 [2024-12-15 02:27:26.728008] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.034 [2024-12-15 02:27:26.728034] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.034 [2024-12-15 02:27:26.728055] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:02.034 [2024-12-15 02:27:26.728080] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.034 [2024-12-15 02:27:26.728135] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.812450] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.812640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:02.295 [2024-12-15 02:27:26.812698] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.812722] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882117] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882338] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:02.295 [2024-12-15 02:27:26.882402] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882426] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882553] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 
00:32:02.295 [2024-12-15 02:27:26.882582] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:02.295 [2024-12-15 02:27:26.882607] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882626] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882666] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882676] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:02.295 [2024-12-15 02:27:26.882684] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882692] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882775] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882785] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:02.295 [2024-12-15 02:27:26.882802] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882812] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882839] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882848] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:32:02.295 [2024-12-15 02:27:26.882857] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882865] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882906] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:02.295 [2024-12-15 02:27:26.882924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.295 [2024-12-15 02:27:26.882935] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.295 [2024-12-15 02:27:26.882980] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:32:02.295 [2024-12-15 02:27:26.882990] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:02.296 [2024-12-15 02:27:26.882999] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:32:02.296 [2024-12-15 02:27:26.883007] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:02.296 [2024-12-15 02:27:26.883140] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 214.069 ms, result 0 00:32:03.238 00:32:03.238 00:32:03.238 02:27:27 ftl.ftl_restore_fast -- ftl/restore.sh@74 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --count=262144 00:32:03.238 [2024-12-15 02:27:27.760503] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:32:03.238 [2024-12-15 02:27:27.760655] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid86432 ] 00:32:03.238 [2024-12-15 02:27:27.924654] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:32:03.498 [2024-12-15 02:27:28.035455] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:32:03.761 [2024-12-15 02:27:28.330878] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.761 [2024-12-15 02:27:28.330965] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:32:03.761 [2024-12-15 02:27:28.491937] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.492007] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:32:03.761 [2024-12-15 02:27:28.492022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:03.761 [2024-12-15 02:27:28.492030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.492086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.492100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:32:03.761 [2024-12-15 02:27:28.492109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:03.761 [2024-12-15 02:27:28.492117] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.492137] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:32:03.761 [2024-12-15 02:27:28.492863] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:32:03.761 [2024-12-15 02:27:28.492889] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.492899] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:32:03.761 [2024-12-15 02:27:28.492908] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.756 ms 00:32:03.761 [2024-12-15 02:27:28.492916] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493225] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:32:03.761 [2024-12-15 02:27:28.493251] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.493263] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:32:03.761 [2024-12-15 02:27:28.493273] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.029 ms 00:32:03.761 [2024-12-15 02:27:28.493281] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493393] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.493405] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:32:03.761 [2024-12-15 02:27:28.493413] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.038 ms 00:32:03.761 [2024-12-15 02:27:28.493420] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493693] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:32:03.761 [2024-12-15 02:27:28.493704] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:32:03.761 [2024-12-15 02:27:28.493714] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.238 ms 00:32:03.761 [2024-12-15 02:27:28.493721] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493794] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.493811] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:32:03.761 [2024-12-15 02:27:28.493819] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:32:03.761 [2024-12-15 02:27:28.493827] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493849] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.493857] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:32:03.761 [2024-12-15 02:27:28.493868] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:32:03.761 [2024-12-15 02:27:28.493876] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.493897] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:32:03.761 [2024-12-15 02:27:28.498273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.498312] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:32:03.761 [2024-12-15 02:27:28.498324] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.379 ms 00:32:03.761 [2024-12-15 02:27:28.498332] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.498372] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.498381] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:32:03.761 [2024-12-15 02:27:28.498389] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.014 ms 00:32:03.761 [2024-12-15 02:27:28.498397] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.498455] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:32:03.761 [2024-12-15 02:27:28.498480] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:32:03.761 [2024-12-15 02:27:28.498519] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:32:03.761 [2024-12-15 02:27:28.498535] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:32:03.761 [2024-12-15 02:27:28.498639] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:32:03.761 [2024-12-15 02:27:28.498650] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:32:03.761 [2024-12-15 02:27:28.498661] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:32:03.761 [2024-12-15 02:27:28.498671] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:32:03.761 [2024-12-15 02:27:28.498680] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:32:03.761 [2024-12-15 02:27:28.498692] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:32:03.761 [2024-12-15 02:27:28.498699] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:32:03.761 [2024-12-15 02:27:28.498707] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:32:03.761 [2024-12-15 02:27:28.498714] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:32:03.761 [2024-12-15 02:27:28.498722] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.498730] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:32:03.761 [2024-12-15 02:27:28.498738] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.270 ms 00:32:03.761 [2024-12-15 02:27:28.498745] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.498828] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.761 [2024-12-15 02:27:28.498836] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:32:03.761 [2024-12-15 02:27:28.498844] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.069 ms 00:32:03.761 [2024-12-15 02:27:28.498854] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:03.761 [2024-12-15 02:27:28.498952] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:32:03.761 [2024-12-15 02:27:28.498962] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:32:03.761 [2024-12-15 02:27:28.498970] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.761 [2024-12-15 02:27:28.498978] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.761 [2024-12-15 02:27:28.498986] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:32:03.761 [2024-12-15 02:27:28.498993] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:32:03.761 [2024-12-15 02:27:28.499001] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:32:03.761 [2024-12-15 02:27:28.499009] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:32:03.761 [2024-12-15 02:27:28.499016] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:32:03.761 [2024-12-15 02:27:28.499023] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.762 [2024-12-15 02:27:28.499029] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:32:03.762 [2024-12-15 02:27:28.499039] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:32:03.762 [2024-12-15 02:27:28.499046] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:32:03.762 [2024-12-15 02:27:28.499053] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:32:03.762 [2024-12-15 02:27:28.499060] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:32:03.762 [2024-12-15 02:27:28.499075] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499082] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:32:03.762 [2024-12-15 02:27:28.499089] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499096] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499104] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:32:03.762 [2024-12-15 02:27:28.499111] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499118] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499125] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:32:03.762 [2024-12-15 02:27:28.499132] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499139] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499145] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:32:03.762 [2024-12-15 02:27:28.499152] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499158] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499165] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:32:03.762 [2024-12-15 02:27:28.499172] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499179] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499186] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:32:03.762 [2024-12-15 02:27:28.499218] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499226] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.762 [2024-12-15 02:27:28.499233] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:32:03.762 [2024-12-15 02:27:28.499241] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:32:03.762 [2024-12-15 02:27:28.499247] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:32:03.762 [2024-12-15 02:27:28.499254] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:32:03.762 [2024-12-15 02:27:28.499262] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:32:03.762 [2024-12-15 02:27:28.499268] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499276] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:32:03.762 [2024-12-15 02:27:28.499284] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:32:03.762 [2024-12-15 02:27:28.499291] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499300] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:32:03.762 [2024-12-15 02:27:28.499309] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:32:03.762 [2024-12-15 02:27:28.499317] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499325] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:32:03.762 [2024-12-15 02:27:28.499336] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:32:03.762 [2024-12-15 02:27:28.499344] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:32:03.762 [2024-12-15 02:27:28.499351] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:32:03.762 
[2024-12-15 02:27:28.499359] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:32:03.762 [2024-12-15 02:27:28.499366] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:32:03.762 [2024-12-15 02:27:28.499373] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:32:03.762 [2024-12-15 02:27:28.499382] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:32:03.762 [2024-12-15 02:27:28.499391] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499400] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:32:03.762 [2024-12-15 02:27:28.499407] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:32:03.762 [2024-12-15 02:27:28.499415] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:32:03.762 [2024-12-15 02:27:28.499422] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:32:03.762 [2024-12-15 02:27:28.499430] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:32:03.762 [2024-12-15 02:27:28.499437] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:32:03.762 [2024-12-15 02:27:28.499443] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:32:03.762 [2024-12-15 02:27:28.499450] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:32:03.762 [2024-12-15 02:27:28.499458] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:32:03.762 [2024-12-15 02:27:28.499464] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499472] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499479] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499486] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499493] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:32:03.762 [2024-12-15 02:27:28.499501] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:32:03.762 [2024-12-15 02:27:28.499510] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499519] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:32:03.762 [2024-12-15 02:27:28.499527] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:32:03.762 [2024-12-15 02:27:28.499534] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:32:03.762 [2024-12-15 02:27:28.499541] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:32:03.762 [2024-12-15 02:27:28.499551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:03.762 [2024-12-15 02:27:28.499559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:32:03.762 [2024-12-15 02:27:28.499567] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.668 ms 00:32:03.762 [2024-12-15 02:27:28.499575] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.527504] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.527690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:32:04.024 [2024-12-15 02:27:28.527767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.885 ms 00:32:04.024 [2024-12-15 02:27:28.527791] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.527897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.527919] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:32:04.024 [2024-12-15 02:27:28.527985] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.065 ms 00:32:04.024 [2024-12-15 02:27:28.528009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.576246] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.576450] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:32:04.024 [2024-12-15 02:27:28.576520] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 48.160 ms 00:32:04.024 [2024-12-15 02:27:28.576545] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.576619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.576645] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:32:04.024 [2024-12-15 02:27:28.576667] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:32:04.024 [2024-12-15 02:27:28.576686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.576816] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.576957] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:32:04.024 [2024-12-15 02:27:28.576983] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.060 ms 00:32:04.024 [2024-12-15 02:27:28.577004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.577158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.577568] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:32:04.024 [2024-12-15 02:27:28.577624] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:32:04.024 [2024-12-15 02:27:28.577647] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.593423] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.593587] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:32:04.024 [2024-12-15 02:27:28.593644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.738 ms 00:32:04.024 [2024-12-15 02:27:28.593667] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.593844] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:32:04.024 [2024-12-15 02:27:28.593888] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:32:04.024 [2024-12-15 02:27:28.593920] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.594023] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:32:04.024 [2024-12-15 02:27:28.594049] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.099 ms 00:32:04.024 [2024-12-15 02:27:28.594069] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.606394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.606539] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:32:04.024 [2024-12-15 02:27:28.606599] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.287 ms 00:32:04.024 [2024-12-15 02:27:28.606621] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.606766] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.606790] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:32:04.024 [2024-12-15 02:27:28.606810] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:32:04.024 [2024-12-15 02:27:28.606880] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.606952] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.606977] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:32:04.024 [2024-12-15 02:27:28.607009] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:32:04.024 [2024-12-15 02:27:28.607027] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.607629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.607672] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:32:04.024 [2024-12-15 02:27:28.607695] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.546 ms 00:32:04.024 [2024-12-15 02:27:28.607714] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.607750] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:32:04.024 [2024-12-15 02:27:28.607851] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.607872] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:32:04.024 [2024-12-15 02:27:28.607893] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.101 ms 00:32:04.024 [2024-12-15 02:27:28.607911] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.620763] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:32:04.024 [2024-12-15 02:27:28.621048] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.621083] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:32:04.024 [2024-12-15 02:27:28.621150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.053 ms 00:32:04.024 [2024-12-15 02:27:28.621173] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.623303] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.623439] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:32:04.024 [2024-12-15 02:27:28.623496] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.061 ms 00:32:04.024 [2024-12-15 02:27:28.623517] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.623629] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.623657] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:32:04.024 [2024-12-15 02:27:28.623678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:32:04.024 [2024-12-15 02:27:28.623697] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.623787] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.623820] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:32:04.024 [2024-12-15 02:27:28.623842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:32:04.024 [2024-12-15 02:27:28.623861] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.024 [2024-12-15 02:27:28.623908] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:32:04.024 [2024-12-15 02:27:28.624377] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.024 [2024-12-15 02:27:28.624390] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:32:04.025 [2024-12-15 02:27:28.624401] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.470 ms 00:32:04.025 [2024-12-15 02:27:28.624408] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.025 [2024-12-15 02:27:28.651521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.025 [2024-12-15 02:27:28.651698] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:32:04.025 [2024-12-15 02:27:28.651760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.086 ms 00:32:04.025 [2024-12-15 02:27:28.651784] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:32:04.025 [2024-12-15 02:27:28.651874] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:32:04.025 [2024-12-15 02:27:28.651900] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:32:04.025 [2024-12-15 02:27:28.651921] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.036 ms
00:32:04.025 [2024-12-15 02:27:28.651941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:32:04.025 [2024-12-15 02:27:28.653343] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 160.908 ms, result 0
00:32:05.409 [2024-12-15T02:28:32.322Z] Copying: 1024/1024 [MB] (average 16 MBps)
[2024-12-15 02:28:32.208160] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:07.557 [2024-12-15 02:28:32.208294] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel
00:33:07.557 [2024-12-15 02:28:32.208316] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms
00:33:07.557 [2024-12-15 02:28:32.208327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:07.557 [2024-12-15 02:28:32.208360] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread
00:33:07.557 [2024-12-15 02:28:32.212214] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:07.557 [2024-12-15 02:28:32.212261] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device
00:33:07.557 [2024-12-15 02:28:32.212274] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.836 ms
00:33:07.557 [2024-12-15 02:28:32.212285] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:07.557 [2024-12-15 02:28:32.212548] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:07.557 [2024-12-15 02:28:32.212567] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller
00:33:07.557 [2024-12-15 02:28:32.212578] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.232 ms
00:33:07.557 [2024-12-15 02:28:32.212586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:07.557 [2024-12-15 02:28:32.212625] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:07.557 [2024-12-15 02:28:32.212634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata
00:33:07.557 [2024-12-15 02:28:32.212644] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms
00:33:07.557 [2024-12-15 02:28:32.212653] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:07.557 [2024-12-15 02:28:32.212724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:33:07.557 [2024-12-15 02:28:32.212735] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state
00:33:07.557 [2024-12-15 02:28:32.212745] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.031 ms
00:33:07.557 [2024-12-15 02:28:32.212754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0
00:33:07.557 [2024-12-15 02:28:32.212770] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity:
00:33:07.557 [2024-12-15 02:28:32.212785] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 0 / 261120 wr_cnt: 0 state: free
00:33:07.557 [2024-12-15 02:28:32.212798] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free
00:33:07.557 [2024-12-15 02:28:32.212806] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15
02:28:32.212813] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212821] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212829] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212837] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212845] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212853] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212861] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212885] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:33:07.557 [2024-12-15 02:28:32.212893] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212902] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212910] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212918] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212925] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212933] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212941] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212949] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212957] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212964] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212972] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212980] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212988] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.212995] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213002] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 
00:33:07.558 [2024-12-15 02:28:32.213010] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213026] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213040] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213065] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213073] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213081] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213088] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213095] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213102] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213115] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213123] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213131] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213140] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213159] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213167] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213175] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213182] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213190] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213217] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213224] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213233] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 
wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213241] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213249] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213257] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213266] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213273] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213281] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213289] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213296] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213304] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213319] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213353] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213361] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213369] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213384] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213391] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213400] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213408] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213416] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213424] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213432] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 78: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213472] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213480] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213488] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213495] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213503] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213511] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213518] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213537] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213544] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:33:07.558 [2024-12-15 02:28:32.213552] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213560] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213567] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213574] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213582] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213589] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213597] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213604] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:33:07.559 [2024-12-15 02:28:32.213620] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:33:07.559 [2024-12-15 02:28:32.213629] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a227f38-bb55-495b-8e3b-2041cd8dcaa2 00:33:07.559 [2024-12-15 02:28:32.213636] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid 
LBAs: 0 00:33:07.559 [2024-12-15 02:28:32.213644] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 32 00:33:07.559 [2024-12-15 02:28:32.213652] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 0 00:33:07.559 [2024-12-15 02:28:32.213661] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: inf 00:33:07.559 [2024-12-15 02:28:32.213669] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:33:07.559 [2024-12-15 02:28:32.213678] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:33:07.559 [2024-12-15 02:28:32.213701] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:33:07.559 [2024-12-15 02:28:32.213708] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:33:07.559 [2024-12-15 02:28:32.213716] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:33:07.559 [2024-12-15 02:28:32.213724] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.559 [2024-12-15 02:28:32.213732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:33:07.559 [2024-12-15 02:28:32.213741] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.955 ms 00:33:07.559 [2024-12-15 02:28:32.213752] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.229516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.559 [2024-12-15 02:28:32.229559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:33:07.559 [2024-12-15 02:28:32.229573] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.747 ms 00:33:07.559 [2024-12-15 02:28:32.229582] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.230024] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:07.559 [2024-12-15 02:28:32.230036] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:33:07.559 [2024-12-15 02:28:32.230054] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.415 ms 00:33:07.559 [2024-12-15 02:28:32.230063] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.270496] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.559 [2024-12-15 02:28:32.270540] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:07.559 [2024-12-15 02:28:32.270554] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.559 [2024-12-15 02:28:32.270565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.270652] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.559 [2024-12-15 02:28:32.270663] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:07.559 [2024-12-15 02:28:32.270679] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.559 [2024-12-15 02:28:32.270689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.270757] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.559 [2024-12-15 02:28:32.270769] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:07.559 [2024-12-15 02:28:32.270779] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.559 [2024-12-15 02:28:32.270788] 
mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.559 [2024-12-15 02:28:32.270805] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.559 [2024-12-15 02:28:32.270815] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:07.559 [2024-12-15 02:28:32.270825] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.559 [2024-12-15 02:28:32.270837] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.361378] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.361440] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:07.820 [2024-12-15 02:28:32.361453] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.361462] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436353] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:07.820 [2024-12-15 02:28:32.436367] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436384] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436489] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436502] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:07.820 [2024-12-15 02:28:32.436512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436522] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436570] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:07.820 [2024-12-15 02:28:32.436591] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436600] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436690] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436701] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:07.820 [2024-12-15 02:28:32.436711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436749] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436758] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:33:07.820 [2024-12-15 02:28:32.436767] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436775] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436824] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436833] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:33:07.820 [2024-12-15 02:28:32.436842] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: 
[FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436851] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.436905] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:33:07.820 [2024-12-15 02:28:32.436915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:07.820 [2024-12-15 02:28:32.436924] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:33:07.820 [2024-12-15 02:28:32.436933] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:07.820 [2024-12-15 02:28:32.437093] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 228.900 ms, result 0 00:33:08.758 00:33:08.758 00:33:08.758 02:28:33 ftl.ftl_restore_fast -- ftl/restore.sh@76 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:33:11.300 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:33:11.300 02:28:35 ftl.ftl_restore_fast -- ftl/restore.sh@79 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --if=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --ob=ftl0 --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --seek=131072 00:33:11.300 [2024-12-15 02:28:35.508444] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 00:33:11.300 [2024-12-15 02:28:35.508535] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87112 ] 00:33:11.300 [2024-12-15 02:28:35.662530] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:33:11.300 [2024-12-15 02:28:35.772182] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:33:11.560 [2024-12-15 02:28:36.097967] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:11.560 [2024-12-15 02:28:36.098084] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:33:11.560 [2024-12-15 02:28:36.263032] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.560 [2024-12-15 02:28:36.263097] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:33:11.560 [2024-12-15 02:28:36.263114] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.006 ms 00:33:11.560 [2024-12-15 02:28:36.263124] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.560 [2024-12-15 02:28:36.263183] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.560 [2024-12-15 02:28:36.263221] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:33:11.560 [2024-12-15 02:28:36.263232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:33:11.560 [2024-12-15 02:28:36.263241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.560 [2024-12-15 02:28:36.263263] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:33:11.560 [2024-12-15 02:28:36.263956] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:33:11.560 [2024-12-15 02:28:36.263986] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.560 [2024-12-15 02:28:36.263995] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 
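For context on the restore step above: spdk_dd reads testfile back in and writes it to the FTL bdev (--ob=ftl0) starting at --seek=131072 output blocks. A minimal sketch of the offset arithmetic in Python, assuming one FTL block is 4 KiB (implied by the layout dumps in this log rather than stated explicitly) and that --seek counts output blocks:

# Offset implied by the spdk_dd invocation above.
# Assumptions: ftl0 exposes 4 KiB blocks (inferred from this log's layout
# dumps) and --seek is counted in output blocks.
FTL_BLOCK_SIZE = 4096      # bytes per FTL block
seek_blocks = 131072       # --seek value from the command above

offset_bytes = seek_blocks * FTL_BLOCK_SIZE
print(f"--seek=131072 -> {offset_bytes // 2**20} MiB into ftl0")   # 512 MiB

Under those assumptions the write starts 512 MiB into the device, i.e. halfway through the 1024 MB region exercised by the copy earlier in this log.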
00:33:11.560 [2024-12-15 02:28:36.264007] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.729 ms 00:33:11.560 [2024-12-15 02:28:36.264016] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.264503] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:33:11.561 [2024-12-15 02:28:36.264565] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.264581] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:33:11.561 [2024-12-15 02:28:36.264592] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.068 ms 00:33:11.561 [2024-12-15 02:28:36.264601] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.264664] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.264674] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:33:11.561 [2024-12-15 02:28:36.264682] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.044 ms 00:33:11.561 [2024-12-15 02:28:36.264689] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.264988] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.265011] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:33:11.561 [2024-12-15 02:28:36.265021] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.262 ms 00:33:11.561 [2024-12-15 02:28:36.265030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.265109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.265120] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:33:11.561 [2024-12-15 02:28:36.265131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:11.561 [2024-12-15 02:28:36.265139] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.265167] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.265187] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:33:11.561 [2024-12-15 02:28:36.265228] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.011 ms 00:33:11.561 [2024-12-15 02:28:36.265237] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.265260] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:33:11.561 [2024-12-15 02:28:36.270277] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.270387] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:33:11.561 [2024-12-15 02:28:36.270398] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.023 ms 00:33:11.561 [2024-12-15 02:28:36.270407] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.270449] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.270461] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:33:11.561 [2024-12-15 02:28:36.270471] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.016 ms 
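The layout dump that follows lists each region twice: the region view gives offsets and sizes in MiB, while the superblock view gives hex block offsets (blk_offs) and block counts (blk_sz). The two agree if one FTL block is 4 KiB. A small conversion sketch; the mapping of region types to names here (0x2 as l2p, 0x9 as the base-device data region) is inferred by matching sizes, not taken from the source:

# Convert superblock region entries (block units) to MiB, to cross-check
# against the MiB figures in the region view of the same dump.
# Assumption: one FTL block = 4096 bytes.
BLOCK = 4096  # bytes per FTL block

def region_mib(blk_offs: int, blk_sz: int) -> tuple[float, float]:
    """Return (offset_mib, size_mib) for a superblock region entry."""
    return blk_offs * BLOCK / 2**20, blk_sz * BLOCK / 2**20

print(region_mib(0x20, 0x5000))      # (0.125, 80.0)    -> "offset: 0.12 MiB ... blocks: 80.00 MiB" (l2p)
print(region_mib(0x40, 0x1900000))   # (0.25, 102400.0) -> matches data_btm's "102400.00 MiB"

Note that the dump labels its size column "blocks:" but prints the value in MiB; the raw block counts appear only in the superblock view.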
00:33:11.561 [2024-12-15 02:28:36.270480] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.270537] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:33:11.561 [2024-12-15 02:28:36.270564] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:33:11.561 [2024-12-15 02:28:36.270607] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:33:11.561 [2024-12-15 02:28:36.270625] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:33:11.561 [2024-12-15 02:28:36.270737] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:33:11.561 [2024-12-15 02:28:36.270751] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:33:11.561 [2024-12-15 02:28:36.270763] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:33:11.561 [2024-12-15 02:28:36.270775] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:33:11.561 [2024-12-15 02:28:36.270787] ftl_layout.c: 687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:33:11.561 [2024-12-15 02:28:36.270800] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:33:11.561 [2024-12-15 02:28:36.270810] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:33:11.561 [2024-12-15 02:28:36.270818] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:33:11.561 [2024-12-15 02:28:36.270826] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:33:11.561 [2024-12-15 02:28:36.270833] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.270842] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:33:11.561 [2024-12-15 02:28:36.270852] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.300 ms 00:33:11.561 [2024-12-15 02:28:36.270859] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.270947] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.561 [2024-12-15 02:28:36.270956] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:33:11.561 [2024-12-15 02:28:36.270965] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:33:11.561 [2024-12-15 02:28:36.270975] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.561 [2024-12-15 02:28:36.271072] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:33:11.561 [2024-12-15 02:28:36.271084] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:33:11.561 [2024-12-15 02:28:36.271093] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271102] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271111] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:33:11.561 [2024-12-15 02:28:36.271120] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271129] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271138] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:33:11.561 [2024-12-15 02:28:36.271146] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271153] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:11.561 [2024-12-15 02:28:36.271161] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:33:11.561 [2024-12-15 02:28:36.271169] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:33:11.561 [2024-12-15 02:28:36.271176] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:33:11.561 [2024-12-15 02:28:36.271189] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:33:11.561 [2024-12-15 02:28:36.271216] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:33:11.561 [2024-12-15 02:28:36.271232] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271240] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:33:11.561 [2024-12-15 02:28:36.271247] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271254] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271262] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:33:11.561 [2024-12-15 02:28:36.271272] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271280] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271287] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:33:11.561 [2024-12-15 02:28:36.271295] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271302] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271308] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:33:11.561 [2024-12-15 02:28:36.271315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271322] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271328] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:33:11.561 [2024-12-15 02:28:36.271335] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271341] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:33:11.561 [2024-12-15 02:28:36.271348] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:33:11.561 [2024-12-15 02:28:36.271356] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271363] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:11.561 [2024-12-15 02:28:36.271369] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:33:11.561 [2024-12-15 02:28:36.271376] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:33:11.561 [2024-12-15 02:28:36.271382] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:33:11.561 [2024-12-15 02:28:36.271388] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:33:11.561 [2024-12-15 
02:28:36.271394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:33:11.561 [2024-12-15 02:28:36.271401] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.561 [2024-12-15 02:28:36.271407] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:33:11.561 [2024-12-15 02:28:36.271415] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:33:11.562 [2024-12-15 02:28:36.271424] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.562 [2024-12-15 02:28:36.271431] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:33:11.562 [2024-12-15 02:28:36.271439] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:33:11.562 [2024-12-15 02:28:36.271452] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:33:11.562 [2024-12-15 02:28:36.271459] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:33:11.562 [2024-12-15 02:28:36.271470] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:33:11.562 [2024-12-15 02:28:36.271477] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:33:11.562 [2024-12-15 02:28:36.271485] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:33:11.562 [2024-12-15 02:28:36.271492] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:33:11.562 [2024-12-15 02:28:36.271501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:33:11.562 [2024-12-15 02:28:36.271509] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:33:11.562 [2024-12-15 02:28:36.271518] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:33:11.562 [2024-12-15 02:28:36.271528] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271536] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:33:11.562 [2024-12-15 02:28:36.271543] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:33:11.562 [2024-12-15 02:28:36.271550] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:33:11.562 [2024-12-15 02:28:36.271559] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:33:11.562 [2024-12-15 02:28:36.271567] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:33:11.562 [2024-12-15 02:28:36.271574] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:33:11.562 [2024-12-15 02:28:36.271580] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:33:11.562 [2024-12-15 02:28:36.271588] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:33:11.562 [2024-12-15 02:28:36.271595] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 
blk_sz:0x40 00:33:11.562 [2024-12-15 02:28:36.271602] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271609] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271616] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271625] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271633] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:33:11.562 [2024-12-15 02:28:36.271641] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:33:11.562 [2024-12-15 02:28:36.271649] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271657] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x20 blk_sz:0x20 00:33:11.562 [2024-12-15 02:28:36.271665] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:33:11.562 [2024-12-15 02:28:36.271677] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:33:11.562 [2024-12-15 02:28:36.271687] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:33:11.562 [2024-12-15 02:28:36.271695] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.562 [2024-12-15 02:28:36.271703] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:33:11.562 [2024-12-15 02:28:36.271713] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.692 ms 00:33:11.562 [2024-12-15 02:28:36.271720] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.562 [2024-12-15 02:28:36.303521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.562 [2024-12-15 02:28:36.303565] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:33:11.562 [2024-12-15 02:28:36.303577] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 31.757 ms 00:33:11.562 [2024-12-15 02:28:36.303586] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.562 [2024-12-15 02:28:36.303673] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.562 [2024-12-15 02:28:36.303682] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:33:11.562 [2024-12-15 02:28:36.303694] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.062 ms 00:33:11.562 [2024-12-15 02:28:36.303703] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.354640] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.354693] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:33:11.824 [2024-12-15 02:28:36.354708] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 50.882 ms 00:33:11.824 [2024-12-15 02:28:36.354718] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.354773] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.354784] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:33:11.824 [2024-12-15 02:28:36.354793] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:33:11.824 [2024-12-15 02:28:36.354802] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.354917] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.354930] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:33:11.824 [2024-12-15 02:28:36.354941] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.056 ms 00:33:11.824 [2024-12-15 02:28:36.354949] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.355086] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.355100] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:33:11.824 [2024-12-15 02:28:36.355109] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.122 ms 00:33:11.824 [2024-12-15 02:28:36.355120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.373299] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.373344] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:33:11.824 [2024-12-15 02:28:36.373356] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 18.140 ms 00:33:11.824 [2024-12-15 02:28:36.373365] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.373507] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 2, empty chunks = 2 00:33:11.824 [2024-12-15 02:28:36.373523] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:33:11.824 [2024-12-15 02:28:36.373535] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.373547] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:33:11.824 [2024-12-15 02:28:36.373557] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:33:11.824 [2024-12-15 02:28:36.373565] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.385873] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.385915] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:33:11.824 [2024-12-15 02:28:36.385927] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.291 ms 00:33:11.824 [2024-12-15 02:28:36.385936] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.386089] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.386101] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:33:11.824 [2024-12-15 02:28:36.386111] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.126 
ms 00:33:11.824 [2024-12-15 02:28:36.386127] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.386179] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.386191] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:33:11.824 [2024-12-15 02:28:36.386232] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:33:11.824 [2024-12-15 02:28:36.386241] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.386847] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.386877] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:33:11.824 [2024-12-15 02:28:36.386887] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.563 ms 00:33:11.824 [2024-12-15 02:28:36.386896] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.824 [2024-12-15 02:28:36.386923] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:33:11.824 [2024-12-15 02:28:36.386936] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.824 [2024-12-15 02:28:36.386946] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore P2L checkpoints 00:33:11.824 [2024-12-15 02:28:36.386955] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.015 ms 00:33:11.824 [2024-12-15 02:28:36.386963] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.401039] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:33:11.825 [2024-12-15 02:28:36.401217] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.401230] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:33:11.825 [2024-12-15 02:28:36.401243] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 14.235 ms 00:33:11.825 [2024-12-15 02:28:36.401255] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.403418] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.403455] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:33:11.825 [2024-12-15 02:28:36.403467] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.136 ms 00:33:11.825 [2024-12-15 02:28:36.403477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.403579] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.403590] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:33:11.825 [2024-12-15 02:28:36.403601] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.042 ms 00:33:11.825 [2024-12-15 02:28:36.403612] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.403637] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.403654] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:33:11.825 [2024-12-15 02:28:36.403663] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:33:11.825 [2024-12-15 02:28:36.403672] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 
00:33:11.825 [2024-12-15 02:28:36.403710] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:33:11.825 [2024-12-15 02:28:36.403721] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.403732] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:33:11.825 [2024-12-15 02:28:36.403743] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:33:11.825 [2024-12-15 02:28:36.403751] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.431521] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.431570] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:33:11.825 [2024-12-15 02:28:36.431583] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.747 ms 00:33:11.825 [2024-12-15 02:28:36.431592] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.431679] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:33:11.825 [2024-12-15 02:28:36.431690] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:33:11.825 [2024-12-15 02:28:36.431701] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.039 ms 00:33:11.825 [2024-12-15 02:28:36.431710] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:33:11.825 [2024-12-15 02:28:36.434365] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 170.767 ms, result 0 00:33:12.768  [2024-12-15T02:28:38.475Z] Copying: 19/1024 [MB] (19 MBps) [spdk_dd progress ticker elided: ~80 intermediate updates from 19/1024 MB through 1023/1024 MB, 2024-12-15T02:28:38Z to 2024-12-15T02:29:54Z] [2024-12-15T02:29:54.102Z] Copying: 1024/1024 [MB] (average 13 MBps)[2024-12-15 02:29:53.954971] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.337 [2024-12-15 02:29:53.955081] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:34:29.337 [2024-12-15 02:29:53.955099] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:29.337 [2024-12-15 02:29:53.955109] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.337 [2024-12-15 02:29:53.957285] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:34:29.337 [2024-12-15 02:29:53.962748] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action
00:34:29.337 [2024-12-15 02:29:53.962801] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:34:29.337 [2024-12-15 02:29:53.962814] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 5.404 ms 00:34:29.337 [2024-12-15 02:29:53.962823] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.337 [2024-12-15 02:29:53.974416] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.337 [2024-12-15 02:29:53.974471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:34:29.337 [2024-12-15 02:29:53.974485] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 8.665 ms 00:34:29.337 [2024-12-15 02:29:53.974494] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.337 [2024-12-15 02:29:53.974527] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.337 [2024-12-15 02:29:53.974537] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:34:29.337 [2024-12-15 02:29:53.974547] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:29.337 [2024-12-15 02:29:53.974555] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.337 [2024-12-15 02:29:53.974626] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.337 [2024-12-15 02:29:53.974640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:34:29.337 [2024-12-15 02:29:53.974650] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.030 ms 00:34:29.337 [2024-12-15 02:29:53.974657] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.337 [2024-12-15 02:29:53.974671] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:34:29.337 [2024-12-15 02:29:53.974684] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 128000 / 261120 wr_cnt: 1 state: open 00:34:29.337 [2024-12-15 02:29:53.974695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974704] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974712] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974720] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974728] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974735] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974743] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974750] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974759] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974767] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974774] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 
[2024-12-15 02:29:53.974782] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974790] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974797] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:34:29.337 [2024-12-15 02:29:53.974805] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974812] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974827] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974842] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974868] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974876] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974891] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974900] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974908] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974922] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974930] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974938] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974945] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974952] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974959] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974966] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 
state: free 00:34:29.338 [2024-12-15 02:29:53.974975] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974982] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974989] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.974997] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975004] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975011] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975018] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975028] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975046] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975055] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975064] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975072] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975082] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975090] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975098] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975127] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975137] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975145] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975154] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975181] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975189] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 
0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975213] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975221] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975229] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975237] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975244] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975252] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975260] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975268] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975275] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975283] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975290] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975298] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975305] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975314] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975326] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 77: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975334] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975343] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975352] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975360] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975368] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975379] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975387] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975395] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975405] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975414] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: 
[FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975423] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975430] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975439] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975448] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975456] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:34:29.338 [2024-12-15 02:29:53.975473] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975482] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975490] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975499] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975507] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975515] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975526] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:34:29.339 [2024-12-15 02:29:53.975542] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:34:29.339 [2024-12-15 02:29:53.975551] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 4a227f38-bb55-495b-8e3b-2041cd8dcaa2 00:34:29.339 [2024-12-15 02:29:53.975559] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 128000 00:34:29.339 [2024-12-15 02:29:53.975567] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 128032 00:34:29.339 [2024-12-15 02:29:53.975575] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 128000 00:34:29.339 [2024-12-15 02:29:53.975583] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0003 00:34:29.339 [2024-12-15 02:29:53.975593] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:34:29.339 [2024-12-15 02:29:53.975601] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:34:29.339 [2024-12-15 02:29:53.975610] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:34:29.339 [2024-12-15 02:29:53.975617] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:34:29.339 [2024-12-15 02:29:53.975623] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:34:29.339 [2024-12-15 02:29:53.975632] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.339 [2024-12-15 02:29:53.975640] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:34:29.339 [2024-12-15 02:29:53.975648] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.961 ms 00:34:29.339 
[2024-12-15 02:29:53.975656] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:53.989610] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.339 [2024-12-15 02:29:53.989659] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:34:29.339 [2024-12-15 02:29:53.989678] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 13.936 ms 00:34:29.339 [2024-12-15 02:29:53.989686] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:53.990111] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:29.339 [2024-12-15 02:29:53.990152] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:34:29.339 [2024-12-15 02:29:53.990163] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.399 ms 00:34:29.339 [2024-12-15 02:29:53.990171] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:54.026857] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.339 [2024-12-15 02:29:54.026921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:29.339 [2024-12-15 02:29:54.026932] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.339 [2024-12-15 02:29:54.026941] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:54.027017] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.339 [2024-12-15 02:29:54.027026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:29.339 [2024-12-15 02:29:54.027035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.339 [2024-12-15 02:29:54.027044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:54.027101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.339 [2024-12-15 02:29:54.027112] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:29.339 [2024-12-15 02:29:54.027126] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.339 [2024-12-15 02:29:54.027134] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.339 [2024-12-15 02:29:54.027152] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.339 [2024-12-15 02:29:54.027161] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:29.339 [2024-12-15 02:29:54.027170] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.339 [2024-12-15 02:29:54.027179] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.110892] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.110959] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:29.600 [2024-12-15 02:29:54.110974] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.110982] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.179909] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.179976] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:29.600 [2024-12-15 02:29:54.179995] mngt/ftl_mngt.c: 
430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180004] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180087] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180098] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:29.600 [2024-12-15 02:29:54.180108] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180120] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180158] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180169] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:29.600 [2024-12-15 02:29:54.180178] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180186] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180501] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:29.600 [2024-12-15 02:29:54.180511] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180520] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180551] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180562] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:34:29.600 [2024-12-15 02:29:54.180572] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180581] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180623] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180634] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:29.600 [2024-12-15 02:29:54.180643] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180652] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180703] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:34:29.600 [2024-12-15 02:29:54.180722] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:29.600 [2024-12-15 02:29:54.180732] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:34:29.600 [2024-12-15 02:29:54.180741] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:29.600 [2024-12-15 02:29:54.180882] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 227.300 ms, result 0 00:34:31.513 00:34:31.513 00:34:31.513 02:29:55 ftl.ftl_restore_fast -- ftl/restore.sh@80 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_dd --ib=ftl0 --of=/home/vagrant/spdk_repo/spdk/test/ftl/testfile --json=/home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json --skip=131072 --count=262144 00:34:31.513 [2024-12-15 02:29:55.842875] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:34:31.513 [2024-12-15 02:29:55.843028] [ DPDK EAL parameters: spdk_dd --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid87902 ] 00:34:31.513 [2024-12-15 02:29:56.007434] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:34:31.513 [2024-12-15 02:29:56.131819] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:34:31.775 [2024-12-15 02:29:56.430940] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:31.775 [2024-12-15 02:29:56.431030] bdev.c:8697:bdev_open_ext: *NOTICE*: Currently unable to find bdev with name: nvc0n1 00:34:32.038 [2024-12-15 02:29:56.593375] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.593448] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Check configuration 00:34:32.038 [2024-12-15 02:29:56.593464] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:34:32.038 [2024-12-15 02:29:56.593473] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.593530] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.593543] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:34:32.038 [2024-12-15 02:29:56.593552] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.037 ms 00:34:32.038 [2024-12-15 02:29:56.593560] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.593583] mngt/ftl_mngt_bdev.c: 196:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using nvc0n1p0 as write buffer cache 00:34:32.038 [2024-12-15 02:29:56.594335] mngt/ftl_mngt_bdev.c: 236:ftl_mngt_open_cache_bdev: *NOTICE*: [FTL][ftl0] Using bdev as NV Cache device 00:34:32.038 [2024-12-15 02:29:56.594366] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.594375] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:34:32.038 [2024-12-15 02:29:56.594384] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.789 ms 00:34:32.038 [2024-12-15 02:29:56.594398] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.595331] mngt/ftl_mngt_md.c: 455:ftl_mngt_load_sb: *NOTICE*: [FTL][ftl0] SHM: clean 1, shm_clean 1 00:34:32.038 [2024-12-15 02:29:56.595400] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.595419] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Load super block 00:34:32.038 [2024-12-15 02:29:56.595431] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.076 ms 00:34:32.038 [2024-12-15 02:29:56.595439] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.595564] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.595577] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Validate super block 00:34:32.038 [2024-12-15 02:29:56.595587] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.040 ms 00:34:32.038 [2024-12-15 02:29:56.595594] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.595897] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 
00:34:32.038 [2024-12-15 02:29:56.595921] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:34:32.038 [2024-12-15 02:29:56.595931] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.266 ms 00:34:32.038 [2024-12-15 02:29:56.595940] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.038 [2024-12-15 02:29:56.596016] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.038 [2024-12-15 02:29:56.596026] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:34:32.039 [2024-12-15 02:29:56.596035] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.058 ms 00:34:32.039 [2024-12-15 02:29:56.596042] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.596069] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.039 [2024-12-15 02:29:56.596080] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Register IO device 00:34:32.039 [2024-12-15 02:29:56.596091] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.007 ms 00:34:32.039 [2024-12-15 02:29:56.596099] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.596121] mngt/ftl_mngt_ioch.c: 57:io_channel_create_cb: *NOTICE*: [FTL][ftl0] FTL IO channel created on app_thread 00:34:32.039 [2024-12-15 02:29:56.600516] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.039 [2024-12-15 02:29:56.600559] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:34:32.039 [2024-12-15 02:29:56.600569] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 4.401 ms 00:34:32.039 [2024-12-15 02:29:56.600578] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.600619] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.039 [2024-12-15 02:29:56.600628] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Decorate bands 00:34:32.039 [2024-12-15 02:29:56.600637] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.013 ms 00:34:32.039 [2024-12-15 02:29:56.600644] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.600708] ftl_layout.c: 613:ftl_layout_setup: *NOTICE*: [FTL][ftl0] FTL layout setup mode 0 00:34:32.039 [2024-12-15 02:29:56.600734] upgrade/ftl_sb_v5.c: 278:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob load 0x150 bytes 00:34:32.039 [2024-12-15 02:29:56.600777] upgrade/ftl_sb_v5.c: 287:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] base layout blob load 0x48 bytes 00:34:32.039 [2024-12-15 02:29:56.600793] upgrade/ftl_sb_v5.c: 294:ftl_superblock_v5_load_blob_area: *NOTICE*: [FTL][ftl0] layout blob load 0x190 bytes 00:34:32.039 [2024-12-15 02:29:56.600901] upgrade/ftl_sb_v5.c: 92:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] nvc layout blob store 0x150 bytes 00:34:32.039 [2024-12-15 02:29:56.600913] upgrade/ftl_sb_v5.c: 101:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] base layout blob store 0x48 bytes 00:34:32.039 [2024-12-15 02:29:56.600924] upgrade/ftl_sb_v5.c: 109:ftl_superblock_v5_store_blob_area: *NOTICE*: [FTL][ftl0] layout blob store 0x190 bytes 00:34:32.039 [2024-12-15 02:29:56.600935] ftl_layout.c: 685:ftl_layout_setup: *NOTICE*: [FTL][ftl0] Base device capacity: 103424.00 MiB 00:34:32.039 [2024-12-15 02:29:56.600944] ftl_layout.c: 
687:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache device capacity: 5171.00 MiB 00:34:32.039 [2024-12-15 02:29:56.600955] ftl_layout.c: 689:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P entries: 20971520 00:34:32.039 [2024-12-15 02:29:56.600963] ftl_layout.c: 690:ftl_layout_setup: *NOTICE*: [FTL][ftl0] L2P address size: 4 00:34:32.039 [2024-12-15 02:29:56.600971] ftl_layout.c: 691:ftl_layout_setup: *NOTICE*: [FTL][ftl0] P2L checkpoint pages: 2048 00:34:32.039 [2024-12-15 02:29:56.600978] ftl_layout.c: 692:ftl_layout_setup: *NOTICE*: [FTL][ftl0] NV cache chunk count 5 00:34:32.039 [2024-12-15 02:29:56.600985] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.039 [2024-12-15 02:29:56.600993] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize layout 00:34:32.039 [2024-12-15 02:29:56.601000] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.281 ms 00:34:32.039 [2024-12-15 02:29:56.601009] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.601097] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.039 [2024-12-15 02:29:56.601107] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Verify layout 00:34:32.039 [2024-12-15 02:29:56.601116] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.070 ms 00:34:32.039 [2024-12-15 02:29:56.601126] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.039 [2024-12-15 02:29:56.601240] ftl_layout.c: 768:ftl_layout_dump: *NOTICE*: [FTL][ftl0] NV cache layout: 00:34:32.039 [2024-12-15 02:29:56.601255] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb 00:34:32.039 [2024-12-15 02:29:56.601265] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601274] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601283] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region l2p 00:34:32.039 [2024-12-15 02:29:56.601292] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601299] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 80.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601307] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md 00:34:32.039 [2024-12-15 02:29:56.601315] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601323] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:32.039 [2024-12-15 02:29:56.601331] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region band_md_mirror 00:34:32.039 [2024-12-15 02:29:56.601339] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 80.62 MiB 00:34:32.039 [2024-12-15 02:29:56.601347] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.50 MiB 00:34:32.039 [2024-12-15 02:29:56.601354] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md 00:34:32.039 [2024-12-15 02:29:56.601363] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.88 MiB 00:34:32.039 [2024-12-15 02:29:56.601379] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601387] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region nvc_md_mirror 00:34:32.039 [2024-12-15 02:29:56.601394] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 114.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601401] ftl_layout.c: 
133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601409] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l0 00:34:32.039 [2024-12-15 02:29:56.601416] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 81.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601423] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601430] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l1 00:34:32.039 [2024-12-15 02:29:56.601437] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 89.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601444] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601452] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l2 00:34:32.039 [2024-12-15 02:29:56.601458] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 97.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601465] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601473] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region p2l3 00:34:32.039 [2024-12-15 02:29:56.601480] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 105.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601486] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 8.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601494] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md 00:34:32.039 [2024-12-15 02:29:56.601501] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601507] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:32.039 [2024-12-15 02:29:56.601513] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_md_mirror 00:34:32.039 [2024-12-15 02:29:56.601519] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.38 MiB 00:34:32.039 [2024-12-15 02:29:56.601526] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.25 MiB 00:34:32.039 [2024-12-15 02:29:56.601533] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log 00:34:32.039 [2024-12-15 02:29:56.601539] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.62 MiB 00:34:32.039 [2024-12-15 02:29:56.601547] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601554] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region trim_log_mirror 00:34:32.039 [2024-12-15 02:29:56.601561] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 113.75 MiB 00:34:32.039 [2024-12-15 02:29:56.601569] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601577] ftl_layout.c: 775:ftl_layout_dump: *NOTICE*: [FTL][ftl0] Base device layout: 00:34:32.039 [2024-12-15 02:29:56.601585] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region sb_mirror 00:34:32.039 [2024-12-15 02:29:56.601592] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601602] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 0.12 MiB 00:34:32.039 [2024-12-15 02:29:56.601613] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region vmap 00:34:32.039 [2024-12-15 02:29:56.601620] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 102400.25 MiB 00:34:32.039 [2024-12-15 02:29:56.601627] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 3.38 MiB 00:34:32.039 
[2024-12-15 02:29:56.601635] ftl_layout.c: 130:dump_region: *NOTICE*: [FTL][ftl0] Region data_btm 00:34:32.039 [2024-12-15 02:29:56.601642] ftl_layout.c: 131:dump_region: *NOTICE*: [FTL][ftl0] offset: 0.25 MiB 00:34:32.039 [2024-12-15 02:29:56.601649] ftl_layout.c: 133:dump_region: *NOTICE*: [FTL][ftl0] blocks: 102400.00 MiB 00:34:32.039 [2024-12-15 02:29:56.601659] upgrade/ftl_sb_v5.c: 408:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - nvc: 00:34:32.039 [2024-12-15 02:29:56.601669] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x0 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601678] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x2 ver:0 blk_offs:0x20 blk_sz:0x5000 00:34:32.040 [2024-12-15 02:29:56.601689] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x3 ver:2 blk_offs:0x5020 blk_sz:0x80 00:34:32.040 [2024-12-15 02:29:56.601696] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x4 ver:2 blk_offs:0x50a0 blk_sz:0x80 00:34:32.040 [2024-12-15 02:29:56.601704] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xa ver:2 blk_offs:0x5120 blk_sz:0x800 00:34:32.040 [2024-12-15 02:29:56.601713] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xb ver:2 blk_offs:0x5920 blk_sz:0x800 00:34:32.040 [2024-12-15 02:29:56.601721] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xc ver:2 blk_offs:0x6120 blk_sz:0x800 00:34:32.040 [2024-12-15 02:29:56.601728] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xd ver:2 blk_offs:0x6920 blk_sz:0x800 00:34:32.040 [2024-12-15 02:29:56.601735] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xe ver:0 blk_offs:0x7120 blk_sz:0x40 00:34:32.040 [2024-12-15 02:29:56.601742] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xf ver:0 blk_offs:0x7160 blk_sz:0x40 00:34:32.040 [2024-12-15 02:29:56.601749] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x10 ver:1 blk_offs:0x71a0 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601756] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x11 ver:1 blk_offs:0x71c0 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601762] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x6 ver:2 blk_offs:0x71e0 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601769] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x7 ver:2 blk_offs:0x7200 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601777] upgrade/ftl_sb_v5.c: 416:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x7220 blk_sz:0x13c0e0 00:34:32.040 [2024-12-15 02:29:56.601784] upgrade/ftl_sb_v5.c: 422:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] SB metadata layout - base dev: 00:34:32.040 [2024-12-15 02:29:56.601791] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x1 ver:5 blk_offs:0x0 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601800] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe 
ver:0 blk_offs:0x20 blk_sz:0x20 00:34:32.040 [2024-12-15 02:29:56.601808] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x9 ver:0 blk_offs:0x40 blk_sz:0x1900000 00:34:32.040 [2024-12-15 02:29:56.601815] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0x5 ver:0 blk_offs:0x1900040 blk_sz:0x360 00:34:32.040 [2024-12-15 02:29:56.601823] upgrade/ftl_sb_v5.c: 430:ftl_superblock_v5_md_layout_dump: *NOTICE*: [FTL][ftl0] Region type:0xfffffffe ver:0 blk_offs:0x19003a0 blk_sz:0x3fc60 00:34:32.040 [2024-12-15 02:29:56.601831] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.601838] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Layout upgrade 00:34:32.040 [2024-12-15 02:29:56.601848] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.677 ms 00:34:32.040 [2024-12-15 02:29:56.601855] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.630228] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.630272] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:34:32.040 [2024-12-15 02:29:56.630285] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 28.330 ms 00:34:32.040 [2024-12-15 02:29:56.630293] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.630384] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.630393] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize band addresses 00:34:32.040 [2024-12-15 02:29:56.630406] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.063 ms 00:34:32.040 [2024-12-15 02:29:56.630415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.673945] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.674005] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:34:32.040 [2024-12-15 02:29:56.674018] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 43.469 ms 00:34:32.040 [2024-12-15 02:29:56.674050] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.674109] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.674121] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:34:32.040 [2024-12-15 02:29:56.674131] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:32.040 [2024-12-15 02:29:56.674140] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.674279] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.674293] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 00:34:32.040 [2024-12-15 02:29:56.674303] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.061 ms 00:34:32.040 [2024-12-15 02:29:56.674311] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.674446] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.674460] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:34:32.040 [2024-12-15 02:29:56.674469] 
mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.116 ms 00:34:32.040 [2024-12-15 02:29:56.674477] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.690471] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.690518] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:34:32.040 [2024-12-15 02:29:56.690530] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.973 ms 00:34:32.040 [2024-12-15 02:29:56.690539] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.690700] ftl_nv_cache.c:1772:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: full chunks = 4, empty chunks = 0 00:34:32.040 [2024-12-15 02:29:56.690714] ftl_nv_cache.c:1776:ftl_nv_cache_load_state: *NOTICE*: [FTL][ftl0] FTL NV Cache: state loaded successfully 00:34:32.040 [2024-12-15 02:29:56.690725] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.690736] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore NV cache metadata 00:34:32.040 [2024-12-15 02:29:56.690746] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.050 ms 00:34:32.040 [2024-12-15 02:29:56.690754] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.703208] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.703254] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore valid map metadata 00:34:32.040 [2024-12-15 02:29:56.703266] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.433 ms 00:34:32.040 [2024-12-15 02:29:56.703274] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.703401] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.703411] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore band info metadata 00:34:32.040 [2024-12-15 02:29:56.703419] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.104 ms 00:34:32.040 [2024-12-15 02:29:56.703434] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.703492] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.703503] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore trim metadata 00:34:32.040 [2024-12-15 02:29:56.703512] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.002 ms 00:34:32.040 [2024-12-15 02:29:56.703528] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.704101] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.704133] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize P2L checkpointing 00:34:32.040 [2024-12-15 02:29:56.704142] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.532 ms 00:34:32.040 [2024-12-15 02:29:56.704150] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.704171] mngt/ftl_mngt_p2l.c: 169:ftl_mngt_p2l_restore_ckpt: *NOTICE*: [FTL][ftl0] SHM: skipping p2l ckpt restore 00:34:32.040 [2024-12-15 02:29:56.704181] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.704190] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: 
[FTL][ftl0] name: Restore P2L checkpoints 00:34:32.040 [2024-12-15 02:29:56.704224] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:32.040 [2024-12-15 02:29:56.704233] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.040 [2024-12-15 02:29:56.716835] ftl_l2p_cache.c: 458:ftl_l2p_cache_init: *NOTICE*: l2p maximum resident size is: 9 (of 10) MiB 00:34:32.040 [2024-12-15 02:29:56.717013] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.040 [2024-12-15 02:29:56.717025] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize L2P 00:34:32.040 [2024-12-15 02:29:56.717036] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 12.760 ms 00:34:32.040 [2024-12-15 02:29:56.717044] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.719273] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.719309] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Restore L2P 00:34:32.041 [2024-12-15 02:29:56.719319] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 2.201 ms 00:34:32.041 [2024-12-15 02:29:56.719327] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.719410] mngt/ftl_mngt_band.c: 414:ftl_mngt_finalize_init_bands: *NOTICE*: [FTL][ftl0] SHM: band open P2L map df_id 0x2400000 00:34:32.041 [2024-12-15 02:29:56.719866] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.719886] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize band initialization 00:34:32.041 [2024-12-15 02:29:56.719896] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.476 ms 00:34:32.041 [2024-12-15 02:29:56.719904] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.719934] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.719943] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Start core poller 00:34:32.041 [2024-12-15 02:29:56.719952] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:34:32.041 [2024-12-15 02:29:56.719959] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.719995] mngt/ftl_mngt_self_test.c: 208:ftl_mngt_self_test: *NOTICE*: [FTL][ftl0] Self test skipped 00:34:32.041 [2024-12-15 02:29:56.720006] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.720014] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Self test on startup 00:34:32.041 [2024-12-15 02:29:56.720022] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.012 ms 00:34:32.041 [2024-12-15 02:29:56.720030] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.747205] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.747259] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL dirty state 00:34:32.041 [2024-12-15 02:29:56.747271] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 27.148 ms 00:34:32.041 [2024-12-15 02:29:56.747280] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.747369] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:34:32.041 [2024-12-15 02:29:56.747379] mngt/ftl_mngt.c: 
428:trace_step: *NOTICE*: [FTL][ftl0] name: Finalize initialization 00:34:32.041 [2024-12-15 02:29:56.747388] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.036 ms 00:34:32.041 [2024-12-15 02:29:56.747396] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:34:32.041 [2024-12-15 02:29:56.748659] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL startup', duration = 154.802 ms, result 0 00:35:33.426 
[2024-12-15T02:31:00.224Z] Copying: 1024/1024 [MB] (average 16 MBps)[2024-12-15 02:31:00.196680] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.459 [2024-12-15 02:31:00.197018] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinit core IO channel 00:35:35.459 [2024-12-15 02:31:00.197046] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.004 ms 00:35:35.459 [2024-12-15 02:31:00.197056] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.459 [2024-12-15 02:31:00.197092] mngt/ftl_mngt_ioch.c: 136:io_channel_destroy_cb: *NOTICE*: [FTL][ftl0] FTL IO channel destroy on app_thread 00:35:35.459 [2024-12-15 02:31:00.200697] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.459 [2024-12-15 02:31:00.200745] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Unregister IO device 00:35:35.459 [2024-12-15 02:31:00.200760] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 3.586 ms 00:35:35.459 [2024-12-15 02:31:00.200777] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.459 [2024-12-15 02:31:00.201022] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.459 [2024-12-15 02:31:00.201032] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Stop core poller 00:35:35.459 [2024-12-15 02:31:00.201043] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.214 ms 00:35:35.459 [2024-12-15 02:31:00.201051] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.459 [2024-12-15 02:31:00.201082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.459 [2024-12-15 02:31:00.201092] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Fast persist NV cache metadata 00:35:35.459 [2024-12-15 02:31:00.201101] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.005 ms 00:35:35.459 [2024-12-15 02:31:00.201110] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.459 [2024-12-15 02:31:00.201172] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.459 [2024-12-15 02:31:00.201185] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Set FTL SHM clean state 00:35:35.459 [2024-12-15 02:31:00.201206] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.026 ms 00:35:35.459 [2024-12-15 02:31:00.201216] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.459 [2024-12-15 02:31:00.201230] ftl_debug.c: 165:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Bands validity: 00:35:35.459 [2024-12-15 02:31:00.201243] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 1: 131072 / 261120 wr_cnt: 1 state: open 00:35:35.459 [2024-12-15 02:31:00.201254] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 2: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 
[2024-12-15 02:31:00.201263] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 3: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201271] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 4: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201279] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 5: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201287] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 6: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201295] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 7: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201303] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 8: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201311] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 9: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201320] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 10: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201328] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 11: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201336] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 12: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201344] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 13: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201351] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 14: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201359] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 15: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201367] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 16: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201376] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 17: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201386] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 18: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201394] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 19: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201402] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 20: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201409] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 21: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201417] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 22: 0 / 261120 wr_cnt: 0 state: free 00:35:35.459 [2024-12-15 02:31:00.201425] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 23: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201433] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 24: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201440] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 25: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201449] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 26: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201457] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 27: 0 / 261120 wr_cnt: 0 state: free 
00:35:35.460 [2024-12-15 02:31:00.201464] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 28: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201639] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 29: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201648] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 30: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201656] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 31: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201664] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 32: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201672] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 33: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201679] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 34: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201687] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 35: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201695] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 36: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201702] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 37: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201710] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 38: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201717] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 39: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201725] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 40: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201732] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 41: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201741] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 42: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201749] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 43: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201756] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 44: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201764] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 45: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201780] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 46: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201788] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 47: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201795] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 48: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201803] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 49: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201811] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 50: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201820] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 51: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201828] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 52: 0 / 261120 
wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201835] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 53: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201843] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 54: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201850] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 55: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201858] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 56: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201865] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 57: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201875] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 58: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201883] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 59: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201890] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 60: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201898] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 61: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201907] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 62: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201915] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 63: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201923] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 64: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201931] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 65: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201939] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 66: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201947] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 67: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201954] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 68: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201962] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 69: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201969] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 70: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201977] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 71: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201985] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 72: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.201992] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 73: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202001] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 74: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202009] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 75: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202017] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 76: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202024] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] 
Band 77: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202033] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 78: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202041] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 79: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202048] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 80: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202068] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 81: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202078] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 82: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202086] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 83: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202094] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 84: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202101] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 85: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202109] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 86: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202117] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 87: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202125] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 88: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202133] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 89: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202147] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 90: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202155] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 91: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202163] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 92: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202171] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 93: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202179] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 94: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202187] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 95: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202207] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 96: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202215] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 97: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202223] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 98: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202231] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 99: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202239] ftl_debug.c: 167:ftl_dev_dump_bands: *NOTICE*: [FTL][ftl0] Band 100: 0 / 261120 wr_cnt: 0 state: free 00:35:35.460 [2024-12-15 02:31:00.202255] ftl_debug.c: 211:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] 00:35:35.460 [2024-12-15 02:31:00.202263] ftl_debug.c: 212:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] device UUID: 
4a227f38-bb55-495b-8e3b-2041cd8dcaa2 00:35:35.461 [2024-12-15 02:31:00.202271] ftl_debug.c: 213:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total valid LBAs: 131072 00:35:35.461 [2024-12-15 02:31:00.202279] ftl_debug.c: 214:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] total writes: 3104 00:35:35.461 [2024-12-15 02:31:00.202287] ftl_debug.c: 215:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] user writes: 3072 00:35:35.461 [2024-12-15 02:31:00.202296] ftl_debug.c: 216:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] WAF: 1.0104 00:35:35.461 [2024-12-15 02:31:00.202308] ftl_debug.c: 218:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] limits: 00:35:35.461 [2024-12-15 02:31:00.202317] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] crit: 0 00:35:35.461 [2024-12-15 02:31:00.202324] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] high: 0 00:35:35.461 [2024-12-15 02:31:00.202331] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] low: 0 00:35:35.461 [2024-12-15 02:31:00.202338] ftl_debug.c: 220:ftl_dev_dump_stats: *NOTICE*: [FTL][ftl0] start: 0 00:35:35.461 [2024-12-15 02:31:00.202346] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.461 [2024-12-15 02:31:00.202354] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Dump statistics 00:35:35.461 [2024-12-15 02:31:00.202371] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 1.116 ms 00:35:35.461 [2024-12-15 02:31:00.202379] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.461 [2024-12-15 02:31:00.217641] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.461 [2024-12-15 02:31:00.217692] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize L2P 00:35:35.461 [2024-12-15 02:31:00.217711] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 15.244 ms 00:35:35.461 [2024-12-15 02:31:00.217719] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.461 [2024-12-15 02:31:00.218249] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Action 00:35:35.461 [2024-12-15 02:31:00.218276] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Deinitialize P2L checkpointing 00:35:35.461 [2024-12-15 02:31:00.218287] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.405 ms 00:35:35.461 [2024-12-15 02:31:00.218296] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.256082] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.256135] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize reloc 00:35:35.722 [2024-12-15 02:31:00.256148] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.256157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.256252] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.256262] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands metadata 00:35:35.722 [2024-12-15 02:31:00.256272] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.256283] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.256356] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.256372] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize trim map 
00:35:35.722 [2024-12-15 02:31:00.256382] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.256391] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.256409] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.256418] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize valid map 00:35:35.722 [2024-12-15 02:31:00.256427] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.256435] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.342394] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.342453] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize NV cache 00:35:35.722 [2024-12-15 02:31:00.342466] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.342474] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.411894] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.411954] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize metadata 00:35:35.722 [2024-12-15 02:31:00.411966] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.411974] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412056] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412066] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize core IO channel 00:35:35.722 [2024-12-15 02:31:00.412082] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412091] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412131] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412141] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize bands 00:35:35.722 [2024-12-15 02:31:00.412150] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412157] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412260] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412271] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize memory pools 00:35:35.722 [2024-12-15 02:31:00.412280] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412292] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412319] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412328] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Initialize superblock 00:35:35.722 [2024-12-15 02:31:00.412337] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412345] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412385] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412395] 
mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open cache bdev 00:35:35.722 [2024-12-15 02:31:00.412404] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412415] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412462] mngt/ftl_mngt.c: 427:trace_step: *NOTICE*: [FTL][ftl0] Rollback 00:35:35.722 [2024-12-15 02:31:00.412471] mngt/ftl_mngt.c: 428:trace_step: *NOTICE*: [FTL][ftl0] name: Open base bdev 00:35:35.722 [2024-12-15 02:31:00.412480] mngt/ftl_mngt.c: 430:trace_step: *NOTICE*: [FTL][ftl0] duration: 0.000 ms 00:35:35.722 [2024-12-15 02:31:00.412489] mngt/ftl_mngt.c: 431:trace_step: *NOTICE*: [FTL][ftl0] status: 0 00:35:35.722 [2024-12-15 02:31:00.412625] mngt/ftl_mngt.c: 459:finish_msg: *NOTICE*: [FTL][ftl0] Management process finished, name 'FTL fast shutdown', duration = 215.916 ms, result 0 00:35:36.664 00:35:36.664 00:35:36.664 02:31:01 ftl.ftl_restore_fast -- ftl/restore.sh@82 -- # md5sum -c /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:38.577 /home/vagrant/spdk_repo/spdk/test/ftl/testfile: OK 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@84 -- # trap - SIGINT SIGTERM EXIT 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@85 -- # restore_kill 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@28 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@29 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/testfile.md5 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@30 -- # rm -f /home/vagrant/spdk_repo/spdk/test/ftl/config/ftl.json 00:35:38.577 Process with pid 85766 is not found 00:35:38.577 Remove shared memory files 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@32 -- # killprocess 85766 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@954 -- # '[' -z 85766 ']' 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@958 -- # kill -0 85766 00:35:38.577 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (85766) - No such process 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@981 -- # echo 'Process with pid 85766 is not found' 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/restore.sh@33 -- # remove_shm 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:38.577 02:31:03 ftl.ftl_restore_fast -- ftl/common.sh@205 -- # rm -f rm -f 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- ftl/common.sh@206 -- # rm -f rm -f /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_band_md /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_l2p_l1 /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_l2p_l2 /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_l2p_l2_ctx /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_nvc_md /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_p2l_pool /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_sb /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_sb_shm /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_trim_bitmap /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_trim_log /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_trim_md /dev/hugepages/ftl_4a227f38-bb55-495b-8e3b-2041cd8dcaa2_vmap 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- ftl/common.sh@207 -- # rm -f rm -f 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- 
ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- ftl/common.sh@209 -- # rm -f rm -f 00:35:38.578 00:35:38.578 real 4m38.986s 00:35:38.578 user 4m26.733s 00:35:38.578 sys 0m12.015s 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:38.578 02:31:03 ftl.ftl_restore_fast -- common/autotest_common.sh@10 -- # set +x 00:35:38.578 ************************************ 00:35:38.578 END TEST ftl_restore_fast 00:35:38.578 ************************************ 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@1 -- # at_ftl_exit 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@14 -- # killprocess 76788 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@954 -- # '[' -z 76788 ']' 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@958 -- # kill -0 76788 00:35:38.839 /home/vagrant/spdk_repo/spdk/test/common/autotest_common.sh: line 958: kill: (76788) - No such process 00:35:38.839 Process with pid 76788 is not found 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@981 -- # echo 'Process with pid 76788 is not found' 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@17 -- # [[ -n 0000:00:11.0 ]] 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@19 -- # spdk_tgt_pid=88590 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@20 -- # waitforlisten 88590 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@835 -- # '[' -z 88590 ']' 00:35:38.839 Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock... 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@839 -- # local rpc_addr=/var/tmp/spdk.sock 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@840 -- # local max_retries=100 00:35:38.839 02:31:03 ftl -- ftl/ftl.sh@18 -- # /home/vagrant/spdk_repo/spdk/build/bin/spdk_tgt 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@842 -- # echo 'Waiting for process to start up and listen on UNIX domain socket /var/tmp/spdk.sock...' 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@844 -- # xtrace_disable 00:35:38.839 02:31:03 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:38.839 [2024-12-15 02:31:03.469739] Starting SPDK v25.01-pre git sha1 e01cb43b8 / DPDK 24.03.0 initialization... 
00:35:38.839 [2024-12-15 02:31:03.469889] [ DPDK EAL parameters: spdk_tgt --no-shconf -c 0x1 --huge-unlink --no-telemetry --log-level=lib.eal:6 --log-level=lib.cryptodev:5 --log-level=lib.power:5 --log-level=user1:6 --iova-mode=pa --base-virtaddr=0x200000000000 --match-allocations --file-prefix=spdk_pid88590 ] 00:35:39.100 [2024-12-15 02:31:03.639970] app.c: 919:spdk_app_start: *NOTICE*: Total cores available: 1 00:35:39.100 [2024-12-15 02:31:03.761971] reactor.c: 995:reactor_run: *NOTICE*: Reactor started on core 0 00:35:40.041 02:31:04 ftl -- common/autotest_common.sh@864 -- # (( i == 0 )) 00:35:40.041 02:31:04 ftl -- common/autotest_common.sh@868 -- # return 0 00:35:40.041 02:31:04 ftl -- ftl/ftl.sh@21 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_nvme_attach_controller -b nvme0 -t PCIe -a 0000:00:11.0 00:35:40.041 nvme0n1 00:35:40.041 02:31:04 ftl -- ftl/ftl.sh@22 -- # clear_lvols 00:35:40.041 02:31:04 ftl -- ftl/common.sh@28 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_get_lvstores 00:35:40.041 02:31:04 ftl -- ftl/common.sh@28 -- # jq -r '.[] | .uuid' 00:35:40.301 02:31:04 ftl -- ftl/common.sh@28 -- # stores=8f6c236a-27f0-465d-80b3-b21d1c704e4d 00:35:40.301 02:31:04 ftl -- ftl/common.sh@29 -- # for lvs in $stores 00:35:40.301 02:31:04 ftl -- ftl/common.sh@30 -- # /home/vagrant/spdk_repo/spdk/scripts/rpc.py bdev_lvol_delete_lvstore -u 8f6c236a-27f0-465d-80b3-b21d1c704e4d 00:35:40.560 02:31:05 ftl -- ftl/ftl.sh@23 -- # killprocess 88590 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@954 -- # '[' -z 88590 ']' 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@958 -- # kill -0 88590 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@959 -- # uname 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@959 -- # '[' Linux = Linux ']' 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@960 -- # ps --no-headers -o comm= 88590 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@960 -- # process_name=reactor_0 00:35:40.560 killing process with pid 88590 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@964 -- # '[' reactor_0 = sudo ']' 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@972 -- # echo 'killing process with pid 88590' 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@973 -- # kill 88590 00:35:40.560 02:31:05 ftl -- common/autotest_common.sh@978 -- # wait 88590 00:35:42.470 02:31:06 ftl -- ftl/ftl.sh@27 -- # /home/vagrant/spdk_repo/spdk/scripts/setup.sh reset 00:35:42.470 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:42.470 Waiting for block devices as requested 00:35:42.470 0000:00:11.0 (1b36 0010): uio_pci_generic -> nvme 00:35:42.470 0000:00:10.0 (1b36 0010): uio_pci_generic -> nvme 00:35:42.731 0000:00:12.0 (1b36 0010): uio_pci_generic -> nvme 00:35:42.731 0000:00:13.0 (1b36 0010): uio_pci_generic -> nvme 00:35:48.020 * Events for some block/disk devices (0000:00:13.0) were not caught, they may be missing 00:35:48.020 02:31:12 ftl -- ftl/ftl.sh@28 -- # remove_shm 00:35:48.020 Remove shared memory files 00:35:48.020 02:31:12 ftl -- ftl/common.sh@204 -- # echo Remove shared memory files 00:35:48.020 02:31:12 ftl -- ftl/common.sh@205 -- # rm -f rm -f 00:35:48.020 02:31:12 ftl -- ftl/common.sh@206 -- # rm -f rm -f 00:35:48.020 02:31:12 ftl -- ftl/common.sh@207 -- # rm -f rm -f 00:35:48.020 02:31:12 ftl -- ftl/common.sh@208 -- # rm -f rm -f /dev/shm/iscsi 00:35:48.020 02:31:12 ftl -- ftl/common.sh@209 -- # rm -f rm -f 00:35:48.020 00:35:48.020 real 
18m0.248s 00:35:48.020 user 19m54.153s 00:35:48.020 sys 1m29.993s 00:35:48.020 02:31:12 ftl -- common/autotest_common.sh@1130 -- # xtrace_disable 00:35:48.020 ************************************ 00:35:48.020 02:31:12 ftl -- common/autotest_common.sh@10 -- # set +x 00:35:48.020 END TEST ftl 00:35:48.020 ************************************ 00:35:48.020 02:31:12 -- spdk/autotest.sh@346 -- # '[' 0 -eq 1 ']' 00:35:48.020 02:31:12 -- spdk/autotest.sh@350 -- # '[' 0 -eq 1 ']' 00:35:48.020 02:31:12 -- spdk/autotest.sh@355 -- # '[' 0 -eq 1 ']' 00:35:48.020 02:31:12 -- spdk/autotest.sh@359 -- # '[' 0 -eq 1 ']' 00:35:48.020 02:31:12 -- spdk/autotest.sh@366 -- # [[ 0 -eq 1 ]] 00:35:48.020 02:31:12 -- spdk/autotest.sh@370 -- # [[ 0 -eq 1 ]] 00:35:48.020 02:31:12 -- spdk/autotest.sh@374 -- # [[ 0 -eq 1 ]] 00:35:48.020 02:31:12 -- spdk/autotest.sh@378 -- # [[ '' -eq 1 ]] 00:35:48.020 02:31:12 -- spdk/autotest.sh@385 -- # trap - SIGINT SIGTERM EXIT 00:35:48.020 02:31:12 -- spdk/autotest.sh@387 -- # timing_enter post_cleanup 00:35:48.020 02:31:12 -- common/autotest_common.sh@726 -- # xtrace_disable 00:35:48.020 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:35:48.020 02:31:12 -- spdk/autotest.sh@388 -- # autotest_cleanup 00:35:48.020 02:31:12 -- common/autotest_common.sh@1396 -- # local autotest_es=0 00:35:48.020 02:31:12 -- common/autotest_common.sh@1397 -- # xtrace_disable 00:35:48.020 02:31:12 -- common/autotest_common.sh@10 -- # set +x 00:35:49.406 INFO: APP EXITING 00:35:49.406 INFO: killing all VMs 00:35:49.406 INFO: killing vhost app 00:35:49.406 INFO: EXIT DONE 00:35:49.667 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:49.928 0000:00:11.0 (1b36 0010): Already using the nvme driver 00:35:49.928 0000:00:10.0 (1b36 0010): Already using the nvme driver 00:35:50.189 0000:00:12.0 (1b36 0010): Already using the nvme driver 00:35:50.189 0000:00:13.0 (1b36 0010): Already using the nvme driver 00:35:50.462 0000:00:03.0 (1af4 1001): Active devices: mount@vda:vda2,mount@vda:vda3,mount@vda:vda5, so not binding PCI dev 00:35:50.725 Cleaning 00:35:50.725 Removing: /var/run/dpdk/spdk0/config 00:35:50.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-0 00:35:50.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-1 00:35:50.725 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-2 00:35:50.986 Removing: /var/run/dpdk/spdk0/fbarray_memseg-2048k-0-3 00:35:50.986 Removing: /var/run/dpdk/spdk0/fbarray_memzone 00:35:50.986 Removing: /var/run/dpdk/spdk0/hugepage_info 00:35:50.986 Removing: /var/run/dpdk/spdk0 00:35:50.986 Removing: /var/run/dpdk/spdk_pid58746 00:35:50.986 Removing: /var/run/dpdk/spdk_pid58954 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59161 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59254 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59292 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59410 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59428 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59616 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59715 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59805 00:35:50.986 Removing: /var/run/dpdk/spdk_pid59916 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60008 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60047 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60084 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60149 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60244 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60669 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60733 00:35:50.986 
Removing: /var/run/dpdk/spdk_pid60785 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60801 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60892 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60908 00:35:50.986 Removing: /var/run/dpdk/spdk_pid60999 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61015 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61068 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61086 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61139 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61156 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61306 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61343 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61426 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61593 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61677 00:35:50.986 Removing: /var/run/dpdk/spdk_pid61708 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62140 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62238 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62347 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62402 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62433 00:35:50.986 Removing: /var/run/dpdk/spdk_pid62512 00:35:50.986 Removing: /var/run/dpdk/spdk_pid63134 00:35:50.986 Removing: /var/run/dpdk/spdk_pid63165 00:35:50.986 Removing: /var/run/dpdk/spdk_pid63634 00:35:50.986 Removing: /var/run/dpdk/spdk_pid63732 00:35:50.986 Removing: /var/run/dpdk/spdk_pid63841 00:35:50.987 Removing: /var/run/dpdk/spdk_pid63894 00:35:50.987 Removing: /var/run/dpdk/spdk_pid63914 00:35:50.987 Removing: /var/run/dpdk/spdk_pid63945 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65782 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65914 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65923 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65935 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65974 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65978 00:35:50.987 Removing: /var/run/dpdk/spdk_pid65990 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66035 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66039 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66051 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66101 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66105 00:35:50.987 Removing: /var/run/dpdk/spdk_pid66117 00:35:50.987 Removing: /var/run/dpdk/spdk_pid67508 00:35:50.987 Removing: /var/run/dpdk/spdk_pid67605 00:35:50.987 Removing: /var/run/dpdk/spdk_pid69013 00:35:50.987 Removing: /var/run/dpdk/spdk_pid70755 00:35:50.987 Removing: /var/run/dpdk/spdk_pid70824 00:35:50.987 Removing: /var/run/dpdk/spdk_pid70907 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71011 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71103 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71204 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71273 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71349 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71453 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71546 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71646 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71716 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71791 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71895 00:35:50.987 Removing: /var/run/dpdk/spdk_pid71987 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72088 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72157 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72232 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72336 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72433 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72529 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72602 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72672 00:35:50.987 Removing: 
/var/run/dpdk/spdk_pid72746 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72826 00:35:50.987 Removing: /var/run/dpdk/spdk_pid72929 00:35:50.987 Removing: /var/run/dpdk/spdk_pid73020 00:35:50.987 Removing: /var/run/dpdk/spdk_pid73115 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73183 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73263 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73337 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73407 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73510 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73605 00:35:51.251 Removing: /var/run/dpdk/spdk_pid73750 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74034 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74071 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74528 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74712 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74814 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74929 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74971 00:35:51.251 Removing: /var/run/dpdk/spdk_pid74996 00:35:51.251 Removing: /var/run/dpdk/spdk_pid75292 00:35:51.251 Removing: /var/run/dpdk/spdk_pid75362 00:35:51.251 Removing: /var/run/dpdk/spdk_pid75435 00:35:51.251 Removing: /var/run/dpdk/spdk_pid75831 00:35:51.251 Removing: /var/run/dpdk/spdk_pid75982 00:35:51.251 Removing: /var/run/dpdk/spdk_pid76788 00:35:51.251 Removing: /var/run/dpdk/spdk_pid76921 00:35:51.251 Removing: /var/run/dpdk/spdk_pid77085 00:35:51.251 Removing: /var/run/dpdk/spdk_pid77204 00:35:51.251 Removing: /var/run/dpdk/spdk_pid77511 00:35:51.251 Removing: /var/run/dpdk/spdk_pid77813 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78159 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78336 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78495 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78547 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78746 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78771 00:35:51.251 Removing: /var/run/dpdk/spdk_pid78818 00:35:51.251 Removing: /var/run/dpdk/spdk_pid79033 00:35:51.251 Removing: /var/run/dpdk/spdk_pid79270 00:35:51.251 Removing: /var/run/dpdk/spdk_pid79863 00:35:51.251 Removing: /var/run/dpdk/spdk_pid80621 00:35:51.251 Removing: /var/run/dpdk/spdk_pid81154 00:35:51.251 Removing: /var/run/dpdk/spdk_pid81939 00:35:51.251 Removing: /var/run/dpdk/spdk_pid82082 00:35:51.251 Removing: /var/run/dpdk/spdk_pid82158 00:35:51.251 Removing: /var/run/dpdk/spdk_pid82535 00:35:51.251 Removing: /var/run/dpdk/spdk_pid82593 00:35:51.251 Removing: /var/run/dpdk/spdk_pid83505 00:35:51.251 Removing: /var/run/dpdk/spdk_pid83981 00:35:51.251 Removing: /var/run/dpdk/spdk_pid84729 00:35:51.251 Removing: /var/run/dpdk/spdk_pid84858 00:35:51.251 Removing: /var/run/dpdk/spdk_pid84900 00:35:51.251 Removing: /var/run/dpdk/spdk_pid84958 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85016 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85081 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85271 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85365 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85432 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85488 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85528 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85602 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85766 00:35:51.251 Removing: /var/run/dpdk/spdk_pid85995 00:35:51.251 Removing: /var/run/dpdk/spdk_pid86432 00:35:51.251 Removing: /var/run/dpdk/spdk_pid87112 00:35:51.251 Removing: /var/run/dpdk/spdk_pid87902 00:35:51.251 Removing: /var/run/dpdk/spdk_pid88590 00:35:51.251 Clean 00:35:51.251 02:31:15 -- common/autotest_common.sh@1453 -- # return 0 00:35:51.251 
02:31:15 -- spdk/autotest.sh@389 -- # timing_exit post_cleanup 00:35:51.251 02:31:15 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.251 02:31:15 -- common/autotest_common.sh@10 -- # set +x 00:35:51.513 02:31:16 -- spdk/autotest.sh@391 -- # timing_exit autotest 00:35:51.513 02:31:16 -- common/autotest_common.sh@732 -- # xtrace_disable 00:35:51.513 02:31:16 -- common/autotest_common.sh@10 -- # set +x 00:35:51.513 02:31:16 -- spdk/autotest.sh@392 -- # chmod a+r /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:35:51.513 02:31:16 -- spdk/autotest.sh@394 -- # [[ -f /home/vagrant/spdk_repo/spdk/../output/udev.log ]] 00:35:51.513 02:31:16 -- spdk/autotest.sh@394 -- # rm -f /home/vagrant/spdk_repo/spdk/../output/udev.log 00:35:51.513 02:31:16 -- spdk/autotest.sh@396 -- # [[ y == y ]] 00:35:51.513 02:31:16 -- spdk/autotest.sh@398 -- # hostname 00:35:51.513 02:31:16 -- spdk/autotest.sh@398 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -c --no-external -d /home/vagrant/spdk_repo/spdk -t fedora39-cloud-1721788873-2326 -o /home/vagrant/spdk_repo/spdk/../output/cov_test.info 00:35:51.513 geninfo: WARNING: invalid characters removed from testname! 00:36:18.100 02:31:41 -- spdk/autotest.sh@399 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -a /home/vagrant/spdk_repo/spdk/../output/cov_base.info -a /home/vagrant/spdk_repo/spdk/../output/cov_test.info -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:20.649 02:31:44 -- spdk/autotest.sh@400 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/dpdk/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:22.621 02:31:47 -- spdk/autotest.sh@404 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info --ignore-errors unused,unused '/usr/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:25.172 02:31:49 -- spdk/autotest.sh@405 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/examples/vmd/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:28.478 02:31:52 -- spdk/autotest.sh@406 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_lspci/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:31.027 02:31:55 -- spdk/autotest.sh@407 -- # lcov --rc lcov_branch_coverage=1 --rc lcov_function_coverage=1 --rc genhtml_branch_coverage=1 --rc genhtml_function_coverage=1 --rc 
genhtml_legend=1 --rc geninfo_all_blocks=1 --rc geninfo_unexecuted_blocks=1 -q -r /home/vagrant/spdk_repo/spdk/../output/cov_total.info '*/app/spdk_top/*' -o /home/vagrant/spdk_repo/spdk/../output/cov_total.info 00:36:33.578 02:31:58 -- spdk/autotest.sh@408 -- # rm -f cov_base.info cov_test.info OLD_STDOUT OLD_STDERR 00:36:33.578 02:31:58 -- spdk/autorun.sh@1 -- $ timing_finish 00:36:33.578 02:31:58 -- common/autotest_common.sh@738 -- $ [[ -e /home/vagrant/spdk_repo/spdk/../output/timing.txt ]] 00:36:33.578 02:31:58 -- common/autotest_common.sh@740 -- $ flamegraph=/usr/local/FlameGraph/flamegraph.pl 00:36:33.578 02:31:58 -- common/autotest_common.sh@741 -- $ [[ -x /usr/local/FlameGraph/flamegraph.pl ]] 00:36:33.578 02:31:58 -- common/autotest_common.sh@744 -- $ /usr/local/FlameGraph/flamegraph.pl --title 'Build Timing' --nametype Step: --countname seconds /home/vagrant/spdk_repo/spdk/../output/timing.txt 00:36:33.578 + [[ -n 5029 ]] 00:36:33.578 + sudo kill 5029 00:36:33.850 [Pipeline] } 00:36:33.866 [Pipeline] // timeout 00:36:33.872 [Pipeline] } 00:36:33.886 [Pipeline] // stage 00:36:33.892 [Pipeline] } 00:36:33.907 [Pipeline] // catchError 00:36:33.917 [Pipeline] stage 00:36:33.920 [Pipeline] { (Stop VM) 00:36:33.932 [Pipeline] sh 00:36:34.220 + vagrant halt 00:36:36.765 ==> default: Halting domain... 00:36:43.374 [Pipeline] sh 00:36:43.656 + vagrant destroy -f 00:36:46.198 ==> default: Removing domain... 00:36:46.783 [Pipeline] sh 00:36:47.067 + mv output /var/jenkins/workspace/nvme-vg-autotest/output 00:36:47.077 [Pipeline] } 00:36:47.092 [Pipeline] // stage 00:36:47.098 [Pipeline] } 00:36:47.112 [Pipeline] // dir 00:36:47.117 [Pipeline] } 00:36:47.131 [Pipeline] // wrap 00:36:47.137 [Pipeline] } 00:36:47.149 [Pipeline] // catchError 00:36:47.158 [Pipeline] stage 00:36:47.160 [Pipeline] { (Epilogue) 00:36:47.172 [Pipeline] sh 00:36:47.459 + jbp/jenkins/jjb-config/jobs/scripts/compress_artifacts.sh 00:36:52.798 [Pipeline] catchError 00:36:52.800 [Pipeline] { 00:36:52.813 [Pipeline] sh 00:36:53.098 + jbp/jenkins/jjb-config/jobs/scripts/check_artifacts_size.sh 00:36:53.098 Artifacts sizes are good 00:36:53.109 [Pipeline] } 00:36:53.123 [Pipeline] // catchError 00:36:53.134 [Pipeline] archiveArtifacts 00:36:53.141 Archiving artifacts 00:36:53.248 [Pipeline] cleanWs 00:36:53.260 [WS-CLEANUP] Deleting project workspace... 00:36:53.260 [WS-CLEANUP] Deferred wipeout is used... 00:36:53.267 [WS-CLEANUP] done 00:36:53.269 [Pipeline] } 00:36:53.284 [Pipeline] // stage 00:36:53.288 [Pipeline] } 00:36:53.301 [Pipeline] // node 00:36:53.306 [Pipeline] End of Pipeline 00:36:53.343 Finished: SUCCESS
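
A note for readers of the teardown trace above: the killprocess helper from common/autotest_common.sh is exercised three times in this log (pids 85766, 76788, 88590), and its control flow can be read off the xtrace line references (@954, @958-@960, @964, @972-@978, @981). What follows is a minimal Bash sketch reconstructed from those traced commands only; it is not SPDK's verbatim implementation, and the sudo branch in particular is an assumption, since this run (process name reactor_0) never takes it.

    # killprocess <pid> - sketch reconstructed from the xtrace above;
    # details not visible in this log are assumptions.
    killprocess() {
        local pid=$1 process_name
        [ -z "$pid" ] && return 1                 # @954: a pid is required
        if ! kill -0 "$pid"; then                 # @958: probe without sending a signal
            echo "Process with pid $pid is not found"   # @981: matches the log above
            return 0
        fi
        if [ "$(uname)" = Linux ]; then           # @959: Linux-only name lookup
            process_name=$(ps --no-headers -o comm= "$pid")   # @960
        fi
        if [ "$process_name" = sudo ]; then       # @964: comparison seen in the trace
            :   # assumed: re-target the child under sudo (branch not shown in this run)
        fi
        echo "killing process with pid $pid"      # @972
        kill "$pid"                               # @973
        wait "$pid"                               # @978: reap, so the caller sees the exit code
    }

The wait call only succeeds because spdk_tgt is a child of the same test shell; for the already-dead pid 85766 earlier in the log, the kill -0 probe fails and only the not-found message is printed, exactly as the trace shows.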